Quantumku by James Penha

and soon haiku too
will wiggle syllables through
computer wormholes

By James Penha

“In an experiment that ticks most of the mystery boxes in modern physics, a group of researchers announced on Wednesday that they had simulated a pair of black holes in a quantum computer and sent a message between them through a shortcut in space-time called a wormhole… In their report, published Wednesday in Nature, the researchers described the result in measured words: ‘This work is a successful attempt at observing traversable wormhole dynamics in an experimental setting.'”

Quote from The New York Times article ‘Physicists Create “The Smallest, Crummiest Wormhole You Can Imagine”’, published November 30, 2022.

Further reading:

https://www.nytimes.com/2022/11/30/science/physics-wormhole-quantum-computer.html

https://www.nature.com/articles/s41586-022-05424-3

Author bio:

Expat New Yorker James Penha (he/him🌈) has lived for the past three decades in Indonesia. Nominated for Pushcart Prizes in fiction and poetry, his work is widely published in journals and anthologies. His newest chapbook of poems, American Daguerreotypes, is available for Kindle. His essays have appeared in The New York Daily News and The New York Times. Penha edits TheNewVerse.News, an online journal of current-events poetry. You can find out more about James’ poetry on his website https://jamespenha.com and catch up with him on Twitter @JamesPenha.

Enjoyed James’ sciku? Check out more of his sciku here: ‘DNAncient’, ‘If A Tree Talks in a Forest’, and ‘Air-Gen-Ku’.

Moonwalk by R. Suresh Babu

AI tinkering lab
all the young student scientists
perform a moonwalking jig

by R. Suresh Babu

My haiku are centered on my experiences as a teacher, where I observe children’s behaviour in classrooms, science labs and around the school campus.

Science is fun. Children do funny acts. This sciku is based on moonwalking in an AI Tinkering Lab when children try to match their dance steps with a steering robot.

Further reading:

What is Tinkering Lab and Why Do We Need It? https://tinker.ly/tinkering-lab-what-is-it-and-why-do-we-need-it/

Artificial Intelligence (Wikipedia) https://en.wikipedia.org/wiki/Artificial_intelligence

Author bio:

R. Suresh Babu is a graduate teacher of English and a teacher counsellor in a Government Residential School in India. He is an alumnus of the Regional Institute of Education, Mysuru in India. His works have been published in Cattails, Failed Haiku, Wales Haiku Journal, Akitsu, Presence, Under the Basho, Poetry Pea Journal and Podcast, The Asahi Shimbun, World Haiku Series, The Mamba, Kontinuum, Haikuniverse, Cold Moon Journal, Chrysanthemum, tsuri-dōrō and The Mainichi. He is a contributing writer to the anthology We Will Not Be Silenced from Indie Blu(e) Publishing. He has done the artwork for the haiku anthology Bull-Headed, edited by Corine Timmer. You can follow him on Twitter @sureshniranam

Read more sciku by R. Suresh Babu: ‘Climate Change’ and ‘Language’.

Not just a game

passing as human
the arrogance of AI
undoing itself

Artificial intelligence (AI) is becoming more prominent and powerful in modern life. Yet AI isn’t infallible, and it sometimes fails in surprising ways.

The board game Go has long been regarded as the hardest strategy board game for artificial intelligence programmes to master. Indeed, while AIs were able to beat top human players at checkers and chess in the 1990s, human Go players held out for another two decades.

Eventually, however, even this hurdle was overcome. In 2016 the AI AlphaGo defeated the 9-dan professional Go player Lee Se-dol, and in 2017 the then world number one, Ke Jie. Today, publicly available Go AI programmes are capable of playing at the highest human levels of the game.

The world of professional Go was changed irrevocably. Some professional players have chosen to leave the game, others seek to learn from AIs (leading to less interesting, more homogeneous styles of professional play), and Go teaching professionals have often found themselves supplanted by AIs. Lee Se-dol himself retired from professional play shortly afterwards, saying: “Even if I become the number one, there is an entity that cannot be defeated”.

Yet AIs are far from perfect, as recent findings suggest.

Wang et al. (2022) created a Go-playing programme to take on KataGo, currently the most powerful publicly available Go-playing system. But rather than trying to develop an AI that plays Go better than KataGo, the researchers trained their AI to behave in unexpected ways. Adopting an ‘adversarial policy’, the programme tricks KataGo into thinking that its victory in the game is secure.

The result?

KataGo passes prematurely (believing it has already won) and loses when the scores are compared. It has a bigger territory, but unlike its adversarial opponent’s, its territory isn’t secured and so it doesn’t score. KataGo’s weakness is something any amateur player of Go would spot (indeed, human players can easily defeat the adversarial programme).
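To make that mechanic concrete, here is a toy Python sketch (my own illustration, not the rules engine or models used in the study): under a simplified form of area scoring, an empty region only counts for a player if it borders that player’s stones exclusively, so a large but unsecured region that still contains an opponent’s stone when both sides pass scores nothing.

```python
from collections import deque

def area_score(board):
    """Toy area scoring: stones count for their owner; an empty region counts
    for a player only if it borders that player's stones exclusively."""
    rows, cols = len(board), len(board[0])
    points = {"B": 0, "W": 0}
    for row in board:
        for cell in row:
            if cell in points:
                points[cell] += 1
    seen = set()
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != "." or (r, c) in seen:
                continue
            # Flood-fill this empty region and record which colours border it.
            region_size, borders, queue = 0, set(), deque([(r, c)])
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                region_size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        if board[ny][nx] == "." and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                        elif board[ny][nx] in points:
                            borders.add(board[ny][nx])
            # A region bordering both colours (or neither) is unsecured: no points.
            if len(borders) == 1:
                points[borders.pop()] += region_size
    return points

# Black "surrounds" a large area, but a lone white stone left alive inside it
# means the empty region borders both colours and counts for no one.
board = [
    "BBBBB",
    "B...B",
    "B.W.B",
    "B...B",
    "BBBBB",
]
print(area_score(board))  # {'B': 16, 'W': 1}: the 8 unsecured empty points score nothing
```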

The research demonstrates that the human-level behaviours displayed by AIs are frequently far from human in their reasoning. They may reach the same conclusions but their methods are very different. Ironically, despite this illustration of how bizarre AIs really are, the arrogance of feeling secure and coming undone as a result is all too human.

The implications of this extend far beyond the game of Go. AI is increasingly found in many areas of modern life, from speech recognition to self-driving cars. As the researchers say: “These failures in Go AI systems are entertaining, but a similar failure in safety-critical systems such as automated financial trading or autonomous vehicles could have dire consequences.” The researchers call for improved AI training to create AI systems with “the high levels of reliability needed for safety-critical systems.”

Original research: https://doi.org/10.48550/arXiv.2211.00241

Dr Andrew Holmes is a former researcher in animal welfare and the founder and editor of The Sciku Project. You can follow him on Twitter here: @AndrewMHolmes.

Authorship

Who wrote Beowulf?
Look for stylistic changes –
a single author?

Beowulf is one of the best-known examples of Old English literature, and debate has raged over whether the poem was written by a single author or combined from multiple sources. New research by Neidorf et al (2019) lends support to the single-author theory.

Beowulf survives in a single manuscript that has been dated to around AD 1000. Using a statistical approach called stylometry, the researchers analysed features of the writing, comparing the poem’s metre, word choices, letter combinations and sense pauses – small pauses between clauses and sentences. They found no evidence of any major stylistic shifts across the poem, suggesting that Beowulf is the work of a single author.
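As a very rough, hedged illustration of one of the stylistic signals mentioned above (letter combinations), the Python sketch below compares the letter-bigram frequency profiles of the two halves of a text. The actual study uses far more sophisticated measures, and the filename here is hypothetical.

```python
from collections import Counter
import math

def bigram_profile(text):
    """Relative frequencies of adjacent-letter pairs in a text."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(a + b for a, b in zip(letters, letters[1:]))
    total = sum(counts.values())
    return {bigram: n / total for bigram, n in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two frequency profiles (1.0 = identical)."""
    dot = sum(p[k] * q.get(k, 0.0) for k in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# "beowulf.txt" is a hypothetical plain-text transcription of the poem.
text = open("beowulf.txt", encoding="utf-8").read()
midpoint = len(text) // 2
first, second = bigram_profile(text[:midpoint]), bigram_profile(text[midpoint:])
print(f"Bigram similarity between halves: {cosine_similarity(first, second):.3f}")
# A value close to 1 suggests a consistent style; a marked drop between sections
# would hint at a change of hand.
```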

Original research: http://dx.doi.org/10.1038/s41562-019-0570-1

The answer is none by John Norwood

The answer is none

Optimized to none more black

There is no knapsack

Most of the three-line poems I write are zen kōans in the syllabic form of haiku. This one was inspired by the disappearance of some corporate swag, an empty knapsack, taken from my cubicle while I was out of the office. It manages to include a couple of pop culture references, a nod to Zen Buddhism, and a reference to the knapsack problem, a famous optimization problem in computer science that is NP-hard, meaning that no algorithm is known that solves it in polynomial time.
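For readers curious about the optimization problem referenced here, a minimal sketch of the classic 0/1 knapsack dynamic programme is shown below (my own example, not part of the poem’s source). Its running time grows with the number of items times the capacity, which is pseudo-polynomial and so does not contradict the problem’s NP-hardness.

```python
def knapsack(values, weights, capacity):
    """Return the maximum total value that fits within the weight capacity (0/1 knapsack)."""
    # best[w] = best value achievable with total weight at most w
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Example: three items; the optimum (value 7) takes the second and third.
print(knapsack(values=[6, 3, 4], weights=[4, 2, 3], capacity=5))
```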

John Norwood is a Mechanical Engineer working with Carbon, Inc. to revolutionize how things are made. His interests include old houses, yoga, baking, cryptography, and bluegrass music. You can follow him on Twitter under the handle @pryoga

Enjoyed this sciku? Check out some of John’s other work: Universal truth, God may be defined, With enough data, Rivers cut corners, and Squeamish ossifrage.

God may be defined by John Norwood

God may be defined

P not equal to NP

Thus unprovable

This poem is about cryptography and makes reference to the famous unsolved P versus NP problem of computer science. Modern cryptographic techniques rely on problems whose solutions are hard to find yet easy to verify (the class NP, for nondeterministic polynomial time, contains the problems whose solutions can be verified in polynomial time). If it were proved that no such hard-to-solve problems exist – that is, if P turned out to equal NP – then much of modern cryptography could be easily cracked. Our security is reliant upon an unprovable state, and the very nature of its unprovability is what makes it secure. This fits my personal definition of God as the unknowable, and I believe the power of faith is rooted in a healthy relationship with that which cannot be known.
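To make the “hard to find, easy to verify” asymmetry concrete, here is a small illustrative sketch (my own example, with made-up numbers) using the subset-sum problem: checking a proposed answer takes a single pass, while the only general methods known for finding one take time that grows exponentially with the number of items.

```python
from itertools import combinations

def verify(numbers, subset, target):
    """Polynomial-time check: is `subset` drawn from `numbers` and does it hit the target?"""
    remaining = list(numbers)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(subset) == target

def find(numbers, target):
    """Brute-force search: no known algorithm avoids exponential blow-up in general."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

numbers = [267, 439, 869, 961, 1153, 1598, 1766, 1922]
target = 4147
solution = find(numbers, target)                     # slow: exponential in the worst case
print(solution, verify(numbers, solution, target))   # fast: checking the answer is easy
```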

The P versus NP problem was formulated independently by Stephen Cook and Leonid Levin in the early 1970s, although the underlying ideas were considered earlier by John Nash, Kurt Gödel and John von Neumann. It is one of the seven Millennium Prize Problems identified by the Clay Mathematics Institute, with a reward of $1 million for the first correct solution.

John Norwood is a Mechanical Engineer working with Carbon, Inc. to revolutionize how things are made. His interests include old houses, yoga, baking, cryptography, and bluegrass music. You can follow him on Twitter under the handle @pryoga

Enjoyed this sciku? Check out some of John’s other work: Universal truth, The answer is none, With enough data, Rivers cut corners, and Squeamish ossifrage.

With enough data by John Norwood

With enough data

A narrow majority

Becomes certainty


This sciku has to do with probability theory used in data science and machine learning. In a nutshell, the more data you have, the lower the uncertainty of your model and the smaller the bias needed to reliably predict an eventual outcome. This paper by Dolev et al (2010) geeks out on some of the technical details of purifying data and machine learning.
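As a toy illustration of the idea (my own sketch, unrelated to the Dolev et al paper): if one outcome holds even a slim 51% majority, the probability that a sample’s majority points the right way climbs towards certainty as the sample grows.

```python
import math

def prob_sample_majority_correct(p, n):
    """P(more than half of n independent draws favour the true majority), true rate p."""
    log_p, log_q = math.log(p), math.log(1 - p)
    total = 0.0
    for k in range(n // 2 + 1, n + 1):
        # log of C(n, k) * p^k * (1-p)^(n-k), computed in log-space to avoid overflow
        log_term = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                    + k * log_p + (n - k) * log_q)
        total += math.exp(log_term)
    return total

p = 0.51  # a narrow majority
for n in (101, 1001, 10001, 100001):
    print(n, round(prob_sample_majority_correct(p, n), 4))
# The printed probabilities rise towards 1 as n grows:
# with enough data, a narrow majority becomes near-certainty.
```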

Original research: https://doi.org/10.1145/1953563.1953567

John Norwood is a Mechanical Engineer working with Carbon, Inc. to revolutionize how things are made. His interests include old houses, yoga, baking, cryptography, and bluegrass music. You can follow him on Twitter under the handle @pryoga

Enjoyed this sciku? Check out some of John’s other work: The answer is none, God may be defined, Rivers cut corners, and Squeamish ossifrage.

Low-rank Representation by Dr David Keyes

Vast sea of numbers,

Can you be described by few,

As bones define flesh?


Many data objects, such as matrices, which are traditionally described by providing a high-precision number for every (row, column) entry, can be represented to usefully high precision by many fewer numbers. This so-called “data sparsity” holds for the mathematical descriptions of many physical and statistical phenomena whose effects or correlations decay smoothly with distance.

An apparently complex interaction or relationship is, with a special perspective, much simpler. In an extreme limit, we can lump the Moon’s gravitational effect on the Earth by assuming that all of its distributed mass is concentrated at a single point. The potential to represent dense matrices by products of a small number of vectors (the number needed is called the “rank”) is analogous to this and leads to huge savings in memory and operations when manipulating such objects. The effect of the whole can be represented by a carefully defined abstraction. One version of this technique is called “skeletonization,” which suggests the Sciku above. For an example of this philosophy, see Yokota et al, 2014.
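As a hedged sketch of the general idea (not the skeletonization algorithms of Yokota et al), the NumPy snippet below builds a matrix whose entries decay smoothly with distance and shows that a handful of rank-one terms reproduce it to high accuracy using a small fraction of the storage.

```python
import numpy as np

# A smooth "interaction" matrix: entries decay with the distance between points,
# much like the physical and statistical kernels described above.
n = 500
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)   # n*n = 250,000 numbers

# Truncated SVD: keep only the k largest singular triplets.
U, s, Vt = np.linalg.svd(A)
k = 10
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]           # stored as ~2*n*k + k numbers

rel_error = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"rank {k} of {n}: relative error {rel_error:.2e}, "
      f"storage {(2 * n * k + k) / n**2:.1%} of the dense matrix")
# Typical output: a relative error many orders of magnitude below one,
# while storing only a few percent of the original entries.
```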

Original research:  https://doi.org/10.14529/jsfi140104

David Keyes directs the Extreme Computing Research Center at KAUST, where he was a founding dean in 2009. He inhabits the intersection of Mathematics, Computer Science, and applications, with a focus on colonizing emerging energy-efficient architectures for scientific computations. He is a Fellow of SIAM and AMS and has received the ACM Gordon Bell Prize and the IEEE Sidney Fernbach Award. As a lover of poetry, he is delighted to discover the Sciku community.

Enjoyed this sciku? Check out David’s other sciku: Algorithmic complexity.

Algorithmic Complexity by Dr David Keyes

It’s not the flop count

But the data location;

The paradigm shifts.


Until recently, the analysis of algorithms emphasized finding the minimum number of operations required to complete a task to a given precision – the algorithmic complexity. This is natural when the operations are both the most time-consuming steps of the computation and a reasonable proxy for all other costs, including data motion.

Today, floating point operations (“flops”) are much cheaper in time and in energy than moving the data into and out of the processing unit. Meanwhile, memory hierarchies based on multilevel caches deliver operands at latencies and energy costs that vary by several orders of magnitude, depending upon where the freshest copies of the data are found. This situation results in the resurrection of old algorithms and the design of new ones that may do many more flops than previously “optimal” methods, but move less data back and forth from remote locations, and thus finish in less time, with smaller energy expenditure.
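A small, machine-dependent illustration of the point (my own sketch, not drawn from the cited papers): the two loops below perform exactly the same additions, but one streams through contiguous memory while the other strides across it, and the timings typically differ by a large factor.

```python
import time
import numpy as np

# Same flop count, different data movement: summing a large matrix along its
# contiguous (row-major) dimension versus against it. The arithmetic is
# identical; only where the operands sit in the memory hierarchy changes.
n = 4000
A = np.random.rand(n, n)   # C-ordered: rows are contiguous in memory

def timed_sum(rows_first):
    start = time.perf_counter()
    total = 0.0
    for i in range(n):
        total += A[i, :].sum() if rows_first else A[:, i].sum()
    return total, time.perf_counter() - start

_, t_rows = timed_sum(rows_first=True)    # streams through contiguous memory
_, t_cols = timed_sum(rows_first=False)   # strided access, poor cache reuse
print(f"row-wise {t_rows:.3f}s, column-wise {t_cols:.3f}s "
      f"({t_cols / t_rows:.1f}x slower for the same flops)")
# The exact ratio depends on the machine, but the flop count alone clearly
# does not determine the run time.
```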

The author’s group has created several pieces of software that have displaced previous choices by optimizing memory transfers rather than flops. An example of a singular value decomposition that overcomes a flops handicap of as much as an order of magnitude is given in Sukkari et al (2016). For a community discussion of new paradigms on the path to exascale, see Dongarra et al (2011).

Original research:

Sukkari et al, 2016, https://doi.org/10.1145/2894747

Dongarra et al, 2011 https://doi.org/10.1177/1094342010391989

David Keyes directs the Extreme Computing Research Center at KAUST, where he was a founding dean in 2009. He inhabits the intersection of Mathematics, Computer Science, and applications, with a focus on colonizing emerging energy-efficient architectures for scientific computations. He is a Fellow of SIAM and AMS and has received the ACM Gordon Bell Prize and the IEEE Sidney Fernbach Award. As a lover of poetry, he is delighted to discover the Sciku community.

Enjoyed this sciku? Check out David’s other sciku: Low-rank representation.