Moonwalk by R. Suresh Babu

AI tinkering lab
all the young student scientists
perform a moonwalking jig

by R. Suresh Babu

My haiku are centred on my experiences as a teacher, where I observe children’s behaviour in classroom situations, science labs and around the school campus.

Science is fun, and children do funny things. This sciku is based on moonwalking in an AI Tinkering Lab, where children try to match their dance steps with a steering robot.

Further reading:

What is Tinkering Lab and Why Do We Need It? https://tinker.ly/tinkering-lab-what-is-it-and-why-do-we-need-it/

Artificial Intelligence (Wikipedia) https://en.wikipedia.org/wiki/Artificial_intelligence

Author bio:

R. Suresh Babu is a graduate teacher of English and a teacher counsellor in a Government Residential School in India. He is an alumnus of the Regional Institute of Education, Mysuru, in India. His works have been published in Cattails, Failed Haiku, Wales Haiku Journal, Akitsu, Presence, Under the Basho, Poetry Pea Journal and Podcast, The Asahi Shimbun, World Haiku Series, The Mamba, Kontinuum, Haikuniverse, Cold Moon Journal, Chrysanthemum, tsuri-dōrō and The Mainichi. He is a contributing writer to the anthology We Will Not Be Silenced from Indie Blu(e) Publishing, and he created the artwork for the haiku anthology Bull-Headed, edited by Corine Timmer. You can follow him on Twitter: @sureshniranam

Read more sciku by R. Suresh Babu: ‘Climate Change’ and ‘Language’.

Not just a game

passing as human
the arrogance of AI
undoing itself

Artificial intelligence (AI) is becoming more prominent and powerful in modern life. Yet AI is not infallible, and sometimes it fails in surprising ways.

The board game Go has long been regarded as the hardest classic strategy game for artificial intelligence programmes to master. Indeed, while AIs were able to beat the best human players at checkers and chess in the mid-nineties, human Go players held out for another two decades.

Eventually, however, even this hurdle was overcome. In 2016 the AI AlphaGo defeated the 9-dan professional Go player Lee Se-dol, and in 2017 it defeated the then world number one, Ke Jie. Today, publicly available Go AI programmes are capable of playing at and beyond the highest human levels of the game.

The world of professional Go was changed irrevocably. Some professional players have chosen to leave the game, others seek to learn from AIs (leading to less interesting, more homogeneous styles of professional play), and Go teaching professionals have often found themselves supplanted by AIs. Lee Se-dol himself retired from professional play shortly afterwards, saying: “Even if I become the number one, there is an entity that cannot be defeated”.

Yet AIs are far from perfect, as recent findings suggest.

Wang et al. (2022) created a Go-playing programme to take on KataGo, the most powerful publicly available Go-playing system. But rather than trying to develop an AI that plays Go better than KataGo, the researchers trained their AI to behave in unexpected ways. Adopting an ‘adversarial policy’, the programme tricks KataGo into thinking that its victory in the game is secure.
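The paper’s actual training pipeline is an AlphaZero-style reinforcement learning loop played against a frozen copy of KataGo, and is far more involved than anything that fits here. Purely as a sketch of the core idea, the toy example below swaps Go for rock-paper-scissors and uses a plain policy-gradient (REINFORCE) update: because the victim’s policy is frozen, the attacker is not learning to play well in general, only to exploit this one opponent’s particular habits. The game, the bias numbers and the update rule are all illustrative assumptions, not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)

# The frozen "victim": a fixed policy with an exploitable bias
# (it over-plays rock). Its parameters never change during training.
victim_probs = np.array([0.5, 0.25, 0.25])  # rock, paper, scissors

# Adversary's payoff matrix: payoff[adversary_move, victim_move].
payoff = np.array([[ 0, -1,  1],   # rock
                   [ 1,  0, -1],   # paper
                   [-1,  1,  0]])  # scissors

theta = np.zeros(3)  # adversary policy logits -- the only thing that learns

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(5000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)           # adversary samples a move
    v = rng.choice(3, p=victim_probs)    # frozen victim samples a move
    reward = payoff[a, v]
    # REINFORCE: grad of log pi(a) for a softmax policy is onehot(a) - probs.
    grad = -probs
    grad[a] += 1.0
    theta += 0.05 * reward * grad

print(softmax(theta))  # mass concentrates on paper, the victim's weak spot
```

Scaled up to Go, the same dynamic is what lets a relatively weak adversary home in on blind spots that strong, well-rounded play would never expose.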

The result?

KataGo passes prematurely (believing it has already won) and loses when the scores are compared. It has the bigger territory, but unlike its adversarial opponent’s, its territory isn’t secured and so it doesn’t score. KataGo’s weakness is something any amateur Go player would spot (indeed, human players can easily defeat the adversarial programme).
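To make that scoring failure concrete: under area scoring in the style of the Tromp-Taylor rules used in computer Go, when the game ends on two passes, ‘dead’ stones are not removed, and an empty region only counts as territory if it borders one colour alone. The toy scorer below is a minimal sketch of that rule, not the paper’s code; it shows a single surviving enemy stone turning a large ‘territory’ into neutral ground worth nothing.

```python
def score(board):
    """Area score: stones plus empty regions that touch only one colour."""
    n = len(board)
    seen = set()
    totals = {'B': 0, 'W': 0}
    # Under area scoring, stones on the board score directly.
    for row in board:
        for cell in row:
            if cell in totals:
                totals[cell] += 1
    # Flood-fill each empty region and note which colours border it.
    for i in range(n):
        for j in range(len(board[i])):
            if board[i][j] != '.' or (i, j) in seen:
                continue
            region, borders, stack = 0, set(), [(i, j)]
            seen.add((i, j))
            while stack:
                r, c = stack.pop()
                region += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < n and 0 <= cc < len(board[rr]):
                        cell = board[rr][cc]
                        if cell == '.' and (rr, cc) not in seen:
                            seen.add((rr, cc))
                            stack.append((rr, cc))
                        elif cell in totals:
                            borders.add(cell)
            if len(borders) == 1:  # secure: only one colour borders it
                totals[borders.pop()] += region
    return totals

# A lone White stone survives inside Black's large open area...
board = [".....",
         "..W..",  # the invader makes the whole region neutral
         "BBBBB",
         "WWWWW",
         "....."]
print(score(board))   # {'B': 5, 'W': 11} -- White wins

# ...remove it, and the same area counts for Black.
board = [".....",
         ".....",
         "BBBBB",
         "WWWWW",
         "....."]
print(score(board))   # {'B': 15, 'W': 10} -- Black wins
```

A human player would capture or simply play out the doomed invading stones before passing; KataGo, confident its territory is already won, passes first and the count goes against it.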

The research demonstrates that the human-level behaviours displayed by AIs are frequently far from human in their reasoning. They may reach the same conclusions but their methods are very different. Ironically, despite this illustration of how bizarre AIs really are, the arrogance of feeling secure and coming undone as a result is all too human.

The implications of this extend far beyond the game of Go. AI is increasingly found in many areas of modern life, from speech recognition to self-driving cars. As the researchers say: “These failures in Go AI systems are entertaining, but a similar failure in safety-critical systems such as automated financial trading or autonomous vehicles could have dire consequences.” The researchers call for improved AI training to create AI systems with “the high levels of reliability needed for safety-critical systems.”

Original research: https://doi.org/10.48550/arXiv.2211.00241

Dr Andrew Holmes is a former researcher in animal welfare and the founder and editor of The Sciku Project. You can follow him on Twitter here: @AndrewMHolmes.