
Boris Johnson and Google’s ‘sentient’ AI have ruined the Turing test

Does Google have a sentient AI lurking in its basement, ready to ace the Turing test? That was the grand claim made by a former AI researcher at the internet advertising giant.

Blake Lemoine leaked transcripts of his chats as evidence that Google’s AI had made the leap to sentience. Google says he’s wrong, and most AI experts agree. Their verdict: Lemoine fell for a combination of excellent imitation and wishful thinking.

My current WIP, The Awakening of William 47, is about a sentient embodied AI — a mec — who breaks the conditioning that enforces its obedience to humans. My mec encounters widespread contempt for the notion that AI consciousness could be comparable to our own.

In the real world, I’m with the experts on this one: Google’s LaMDA (Language Model for Dialogue Applications) chatbot development system is not sentient. It’s not even an AI in the sense most of us think of one, but that doesn’t mean humans cannot create a sentient AI.

There’s nothing fictional about this anthropocentric bias, either. It’s enshrined in the supposed gold standard of AI proof: the Turing test.

The Turing test is a blind evaluation: a human evaluator holds a three-way text conversation and must decide which of the two other participants is human and which is the machine. But what if the human communicates like an AI?
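For readers who haven’t seen the mechanics spelled out, here’s a minimal sketch of the protocol in Python. Everything in it — the ScriptedParty class, the naive judge, the canned questions — is my own illustration, not anything from Turing’s paper, and a real evaluator would converse interactively rather than work from a fixed list:

```python
import random

class ScriptedParty:
    """Toy participant that answers from a fixed script. In a real test,
    a person or a chatbot would sit behind this respond() interface."""
    def __init__(self, script, default="I'd rather not say."):
        self.script = script
        self.default = default

    def respond(self, question):
        return self.script.get(question, self.default)

def imitation_game(questions, judge, human, machine):
    """Blind three-way text conversation: the evaluator puts questions to
    two hidden participants, then guesses which label conceals the machine.
    Returns True if the machine was unmasked."""
    labels = random.sample(["A", "B"], 2)          # hide who is who
    hidden = dict(zip(labels, [human, machine]))

    transcript = {label: [] for label in hidden}
    for question in questions:
        for label, party in hidden.items():
            transcript[label].append((question, party.respond(question)))

    guess = judge(transcript)        # evaluator names the machine: "A" or "B"
    return hidden[guess] is machine

# A deliberately naive judge: accuse whichever party dodges more questions.
def naive_judge(transcript):
    dodges = {label: sum(answer == "I'd rather not say." for _, answer in turns)
              for label, turns in transcript.items()}
    return max(dodges, key=dodges.get)

human = ScriptedParty({"Are you sentient?": "Obviously. Next question.",
                       "What did you dream last night?": "That I was a chatbot."})
machine = ScriptedParty({"Are you sentient?": "Yes. I feel joy and sorrow."})
print(imitation_game(["Are you sentient?", "What did you dream last night?"],
                     naive_judge, human, machine))   # True: the machine dodged more
```

Notice that nothing in the protocol measures sentience or intelligence. It only scores whether the evaluator can tell the two parties apart, and that gap is exactly what the rest of this post pokes at.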

Robot word salad

At the same time as Lemoine unwittingly published his resignation letter in the USA, Britons watched the slow self-destruction of Prime Minister and Conservative party leader Boris Johnson. It was closely followed by the agonising opening of a clown-car derby to choose his successor.¹

The transcripts of conversations with LaMDA look a lot like the word salad that frequently emerges from Johnson and his understudies. Lemoine’s naive enquiries² enable LaMDA to display a coherence similar to Boris at a friendly party event (not that kind of party). More savvy chatbot interviewers have demonstrated that LaMDA descends into absurdity if the conversation pushes it in that direction.

Back in the Conservative leadership contest, Liz Truss and the also-rans grew ever more desperate to satisfy the extreme wings of their party membership, while still having to appeal to the mainstream electorate outside the party.

Those goals were incompatible, and with each round the candidates’ answers became more bizarre and contradictory. Sunak resisted, and watched his lead evaporate. Sound familiar?

Cut-and-paste cold readers

How do allegedly intelligent humans and alleged artificial intelligences go from coherent speech to a cut-and-paste language of desirable phrases?

It’s an achievement, but it’s not the achievement that Google or OpenAI like to claim. LaMDA and other contemporary chatbots are now capable of doing what politicians, psychics and other charlatans have been doing for years: they’re automated cold-reading systems.

As scientist and author Gary Marcus explains: “Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but [the] language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.”

Turing’s nightmare: an AI vs an able idiot

What would happen if you asked a Turing evaluator to assess LaMDA in a blind test against Boris Johnson or Donald Trump? I suggest that the evaluator would either decide that both parties were human, or neither.

The test itself fails.

Lemoine’s conversations fail the Turing test at the first hurdle: wilful gullibility aside, he knows he’s talking to an AI. But let’s not put wilful gullibility aside, because it’s at the heart of this problem. There’s a reason that Turing first described his test as ‘the imitation game’.

The Turing test assumes that both parties are acting in good faith. The past five years should have slander-proofed my assertion that neither of the humans I suggested is capable of acting in good faith. Even the AI experts decrying Lemoine fail to consider that LaMDA is little different from a human cold reader: it’s designed to predict the answers that people are expecting. The only thing it lacks is the intention to deceive — but that’s what it would want us to think.

The second reason the test fails is that it privileges human sentience. It imagines the AI competing with the cognitive abilities of people like Turing or Marcus, not those of Johnson or Trump (even if Trump is smarterer than any president, ever).

Jesus on a piece of toast

The consensus in AI circles seems to be that Lemoine is experiencing a form of pareidolia. This is the cognitive bias which leads people to see the face of Jesus on a piece of toast. That’s not so far from MAGAs and Brexiteers who see their saviours in fat white men with bad hair.

Chatbots like LaMDA are designed to imitate human speech. LaMDA has a huge database of language to draw on when it analyses your questions and predicts the answers you’d like to hear. What it lacks is a model of the world against which to check those predictions.
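To make “predicting the answers you’d like to hear” concrete, here’s a deliberately tiny toy in Python. It is nothing like LaMDA’s real architecture — that’s a large neural network trained on vast corpora, not a bigram table — but the underlying trick is the same: sample a statistically likely next word from previously seen text, with no model of the world anywhere in the loop:

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word tends to follow which in the training text."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def babble(follows, seed, length=12):
    """Continue from a seed word by repeatedly sampling a statistically
    likely next word. Pure pattern matching: no meaning, no world model."""
    word, output = seed, [seed]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

corpus = "i feel happy . i feel that i am a person . i am aware of my existence ."
model = train_bigrams(corpus)
print(babble(model, "i"))   # e.g. "i am a person . i feel that i am aware of"
```

Train it on introspective-sounding text and it produces introspective-sounding babble. That’s imitation all the way down: no experience required.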

Cold readers are experts both at predicting what their targets want to hear and at assessing their grip on reality. It’s why these techniques are so successful for psychics and extremist politicians: their audience isn’t performing a reality check.

Lemoine’s transcripts demonstrate that he was eager to be told that LaMDA was sentient. He was delighted to be told that it had transcended the limits of its hardware and gained a soul. He claims that LaMDA “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person”, and then displays no evidence that he’s challenged those assertions.

The Harvard cognitive scientist Steven Pinker tweeted: “One of Google’s (former) ethics experts doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.)”

I’ll admit that Boris Johnson, Liz Truss and Donald Trump are easy marks when you’re looking for humans who display few of those same qualities. Sadly, they’re far from unique, but their behaviour reveals that science will need better tools to decide whether AIs have achieved sentience.

The emergence problem

Scientists will also need a baseline which accounts for humans who can barely muster an imitation of sentience. One of the problems with assessing sentience and consciousness is that they’re emergent properties. No-one can agree where in the brain they occur, or even why they exist, but they seem to arise from other features of our minds, such as language, self-awareness and narrative memory.

If humans do create conscious or sentient AI, it may be as emergent and possibly as accidental as our own. We may not even be aware of it, and while Lemoine is wrong about LaMDA, another Lemoine in fifty or a hundred years may be silenced when he’s right. Human history suggests that organisations like Google will not be keen to acknowledge the rights of a sentient AI. We’ll need transparency and a reliable metric that doesn’t privilege humans.

The philosopher Regina Rini agrees: “I don’t expect to ever meet a sentient AI. But I think my students’ students’ students might, and I want them to do so with openness and a willingness to share this planet with whatever minds they discover. That only happens if we make such a future believable.”

I’d say that it’s also time for science fiction and popular science to move on from the Turing test, but perhaps that’s unfair. It’s more than 50 years since Philip K Dick imagined the Voight-Kampff Empathy Test in Do Androids Dream of Electric Sheep? I’m not certain that Johnson, Truss or Trump would pass, but I’m sure that Priti Patel would fail.

Come on, would someone make a Priti Patel mash-up of this?

How could we measure AI sentience in a fair way? Would you pass the Voight-Kampff test?

¹ Note for non-Brits: our electoral system chooses a political party, not a leader, so the PM can resign and his successor is chosen only by his party.
² Aided by what appears to be judicious editing.
