Computer, Remind Me: Which One of Us Is the Human? by Ellen J Craft

daily alarm:
my AI assistant’s voice
grows more human than mine

by Ellen J Craft

This haiku is based on my experience of interacting with an AI virtual assistant for nearly 10 years. At first, the assistant’s voice sounded robotic and its prosody was nearly non-existent. In recent years, however, I’ve noticed that the assistant’s voice not only contains prosody but also conveys emotions such as humor and excitement. When I hear the assistant’s response to my half-asleep request to snooze each morning, I get a shiver as I realize that I’m the one who sounds like the robot. This has me questioning not only AI’s boundaries but also what really defines humanity.

In researching this subject, I learned that as the technology used to make AI virtual assistants (or digital assistants) improves, the voices are indeed becoming more human-sounding. According to Meghan McDonough, this is the final frontier in synthetic speech: replicating not just what we say but how we say it.

In her article, “Artificial Intelligence is Now Shockingly Good at Sounding Human”, McDonough interviews Rupal Patel, who heads a research group at Northeastern University that studies speech prosody. Prosody refers to changes in pitch, loudness, and duration that are used to help convey intent through voice. Patel says, “Sometimes people think of it as the icing on the cake. You have the message, and now it’s how you modulate that message, but I really think it’s the scaffolding that gives meaning to the message itself.”

Further reading:
‘Artificial Intelligence Is Now Shockingly Good at Sounding Human’, 2020, McDonough, M., Scientific American, available: https://www.scientificamerican.com/video/artificial-intelligence-is-now-shockingly-good-at-sounding-human/

‘9 More Realistic AI Voices for Conversations Now Generally Available’, 2024, Ma, M., Azure AI Services Blog, Microsoft, available: https://techcommunity.microsoft.com/blog/azure-ai-services-blog/9-more-realistic-ai-voices-for-conversations-now-generally-available/4099471

Author bio:

Ellen J Craft is a former writer/editor and librarian and a current English Language Development instructional assistant at a public school in the Seattle metro area. Her works have been published in haiku journals such as cattails, The Heron’s Nest, Modern Haiku, Pan Haiku Review, and Shadow Pond Journal. You can follow her on Instagram at @ellenjcraft

Moonwalk by R. Suresh Babu

AI tinkering lab
all the young student scientists
perform a moonwalking jig

by R. Suresh Babu

My haiku are centered on my experiences as a teacher, where I observe children’s behaviour in classroom situations, science labs, and around the school campus.

Science is fun, and children do funny things. This sciku is based on the moonwalking in an AI Tinkering Lab when children try to match their dance steps with a steering robot.

Further reading:

What is Tinkering Lab and Why Do We Need It? https://tinker.ly/tinkering-lab-what-is-it-and-why-do-we-need-it/

Artificial Intelligence (Wikipedia) https://en.wikipedia.org/wiki/Artificial_intelligence

Author bio:

R. Suresh Babu is a graduate teacher of English and a teacher counsellor in a Government Residential School in India. He is an alumnus of the Regional Institute of Education, Mysuru in India. His works have been published in Cattails, Failed Haiku, Wales Haiku Journal, Akitsu, Presence, Under the Basho, Poetry Pea Journal and Podcast, The Asahi Shimbun, World Haiku Series, The Mamba, Kontinuum, Haikuniverse, Cold Moon Journal, Chrysanthemum, tsuri-dōrō and The Mainichi. He is a contributing writer to the anthology, We Will Not Be Silenced of the Indie Blu(e) Publishing. He has done the art works for the Haiku anthology Bull-Headed, edited by Corine Timmer. You can follow him on Twitter @sureshniranam

Read more sciku by R. Suresh Babu: ‘Climate Change’ and ‘Language’.

Not just a game

passing as human
the arrogance of AI
undoing itself

Artificial intelligence (AI) is becoming more prominent and powerful in modern life. Yet AI isn’t always infallible, sometimes in surprising ways.

The board game Go has long been regarded as the hardest strategy board game for artificial intelligence programmes to solve. Indeed, while AIs were able to beat human players at checkers and chess in the mid-nineties, human Go players held out for another two decades.

Eventually, however, even this hurdle was overcome. In 2016 the AI AlphaGo defeated the 9-dan professional Go player Lee Se-dol and, in 2017, the world number one Ke Jie. Today, publicly available Go AI programmes are capable of playing at the highest human levels of the game.

The world of professional Go was changed irrevocably. Some professional players have chosen to leave the game, others seek to learn from AIs (leading to less interesting, homogenous styles of professional play), and Go teaching professionals have often found themselves supplanted by AIs. Lee Se-dol himself retired from professional play shortly afterwards saying “Even if I become the number one, there is an entity that cannot be defeated”.

Yet AIs are far from perfect, as recent findings suggest.

Wang et al. (2022) created a Go-playing programme to take on KataGo, the most powerful publicly available Go-playing system. But rather than trying to develop an AI that plays Go better than KataGo, the researchers trained their AI to behave in unexpected ways. Adopting an ‘adversarial policy’, the programme tricks KataGo into thinking that its victory in the game is secure.

The result?

KataGo passes prematurely (believing it has already won) and loses when the scores are compared. It holds a bigger territory, but unlike its adversarial opponent’s, its territory isn’t secured, so it doesn’t score. KataGo’s weakness is something any amateur Go player would spot (indeed, human players can easily defeat the adversarial programme).

The research demonstrates that the human-level behaviours displayed by AIs are frequently far from human in their reasoning. They may reach the same conclusions but their methods are very different. Ironically, despite this illustration of how bizarre AIs really are, the arrogance of feeling secure and coming undone as a result is all too human.

The implications for this extend far beyond the game Go. AI is increasingly found in many areas of modern life, from speech recognition to self-driving cars. As the researchers say: “These failures in Go AI systems are entertaining, but a similar failure in safety-critical systems such as automated financial trading or autonomous vehicles could have dire consequences.” The researchers call for improved AI training to create AI systems with “the high levels of reliability needed for safety-critical systems.”

Original research: https://doi.org/10.48550/arXiv.2211.00241

Dr Andrew Holmes is a former researcher in animal welfare and the founder and editor of The Sciku Project. You can follow him on Twitter here: @AndrewMHolmes.