Not just a game

passing as human
the arrogance of AI
undoing itself

Artificial intelligence (AI) is becoming more prominent and powerful in modern life. Yet AI isn’t infallible, and it sometimes fails in surprising ways.

Go has long been regarded as the hardest strategy board game for artificial intelligence programmes to master. Indeed, while AIs were beating top human players at checkers and chess in the 1990s, human Go players held out for another two decades.

Eventually, however, even this hurdle was overcome. In 2016 the AI AlphaGo defeated the 9-dan professional Go player Lee Se-dol, and in 2017 it beat the then world number one, Ke Jie. Today, publicly available Go AI programmes are capable of playing at the highest human levels of the game.

The world of professional Go was changed irrevocably. Some professional players have chosen to leave the game, others seek to learn from AIs (leading to less interesting, more homogeneous styles of professional play), and Go teaching professionals have often found themselves supplanted by AIs. Lee Se-dol himself retired from professional play shortly afterwards, saying: “Even if I become the number one, there is an entity that cannot be defeated”.

Yet AIs are far from perfect, as recent findings suggest.

Wang et al. (2022) created a Go-playing programme to take on KataGo, the most powerful publicly available Go-playing system. But rather than trying to develop an AI that plays Go better than KataGo, the researchers trained their AI to behave in unexpected ways. Adopting an ‘adversarial policy’, the programme tricks KataGo into thinking that its victory in the game is secure.

The result?

KataGo passes prematurely (believing it has already won) and loses when the scores are compared. It has the bigger territory but, unlike its adversarial opponent, its territory isn’t secured and so it doesn’t score. KataGo’s weakness is something any amateur player of Go would spot (indeed, human players can easily defeat the adversarial programme).
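
To see why passing early is fatal, it helps to look at how such games are scored automatically. Below is a minimal Python sketch of area scoring in the Tromp-Taylor style (the kind of automated count these systems play under); the toy position and function name are invented for illustration and are not taken from the paper or from KataGo.

```python
# A minimal sketch of area scoring in the Tromp-Taylor style: a player scores
# their stones plus any empty region bordered only by their own colour.
# The toy board and function below are illustrative, not taken from the paper.

def area_score(board):
    """Return {'B': points, 'W': points} for a finished position."""
    rows, cols = len(board), len(board[0])
    score = {"B": 0, "W": 0}
    seen = set()

    for r in range(rows):
        for c in range(cols):
            cell = board[r][c]
            if cell in score:
                score[cell] += 1               # stones on the board count directly
            elif (r, c) not in seen:
                # Flood-fill this empty region, noting which colours border it.
                region, borders, stack = 0, set(), [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    region += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols:
                            if board[ny][nx] == ".":
                                stack.append((ny, nx))
                            else:
                                borders.add(board[ny][nx])
                if len(borders) == 1:          # territory only counts if fully enclosed
                    score[borders.pop()] += region
    return score

# Black (the victim) walls off the whole left side, but one uncaptured White
# stone sits inside, so that 14-point region borders both colours and scores
# nothing. White's thin strip on the right is secure and does score.
board = [
    "...BW.",
    ".W.BW.",
    "...BW.",
    "...BW.",
    "...BW.",
]
print(area_score(board))  # {'B': 5, 'W': 11} -- the bigger 'territory' loses
```

On this invented position the victim’s much larger walled-off area counts for nothing because a single uncaptured opposing stone sits inside it, while the attacker’s small but secure strip counts in full: broadly the pattern the adversarial policy exploits when KataGo passes too soon.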

The research demonstrates that the human-level behaviours displayed by AIs are frequently far from human in their reasoning. They may reach the same conclusions, but their methods are very different. Ironically, despite this illustration of how bizarre AIs really are, the arrogance of feeling secure and coming undone as a result is all too human.

The implications of this extend far beyond the game of Go. AI is increasingly found in many areas of modern life, from speech recognition to self-driving cars. As the researchers say: “These failures in Go AI systems are entertaining, but a similar failure in safety-critical systems such as automated financial trading or autonomous vehicles could have dire consequences.” The researchers call for improved AI training to create AI systems with “the high levels of reliability needed for safety-critical systems.”

Original research: https://doi.org/10.48550/arXiv.2211.00241

Dr Andrew Holmes is a former researcher in animal welfare and the founder and editor of The Sciku Project. You can follow him on Twitter here: @AndrewMHolmes.

With enough data by John Norwood

With enough data

A narrow majority

Becomes certainty


This sciku has to do with probability theory as used in data science and machine learning. In a nutshell, the more data you have, the lower the uncertainty of your model and the smaller the bias needed to reliably predict an eventual outcome. This paper by Dolev et al. (2010) geeks out on some of the technical details of purifying data and machine learning.
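
As a rough back-of-the-envelope illustration (not taken from Dolev et al.), the sketch below simulates observations that each “vote” for the right answer with probability 0.51 and estimates how often the overall majority verdict is correct; the function name and parameters are invented for this example.

```python
# A rough illustration (not from Dolev et al.) of the haiku's point: each
# observation "votes" for the right answer with probability 0.51, and we
# estimate how often the overall majority verdict is correct.
import random

def majority_correct_rate(p=0.51, n_samples=1001, trials=1000, seed=0):
    """Fraction of trials in which a majority of n noisy votes lands on the truth."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p for _ in range(n_samples))
        wins += correct_votes > n_samples / 2
    return wins / trials

for n in (11, 101, 1001, 10001):
    print(f"{n:>5} samples -> majority correct ~{majority_correct_rate(n_samples=n):.2f} of the time")
```

With only a handful of samples the 51% edge is barely visible, but by around ten thousand samples the majority verdict is right nearly every time: the narrow majority has, for practical purposes, become certainty.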

Original research: https://doi.org/10.1145/1953563.1953567

John Norwood is a Mechanical Engineer working with Carbon, Inc. to revolutionize how things are made. His interests include old houses, yoga, baking, cryptography, and bluegrass music. You can follow him on Twitter under the handle @pryoga.

Enjoyed this sciku? Check out some of John’s other work: The answer is none, God may be defined, Rivers cut corners, and Squeamish ossifrage.