
Comparative Intelligence: a scale for cross-species sentience and AI

And pray that there’s intelligent life somewhere up in space… …’cos there’s bugger-all down here on Earth

Eric Idle, “Galaxy Song”, Monty Python’s The Meaning of Life

“AI” is one of the most over-used and misapplied terms in popular science, right after “quantum leap”. Animal intelligence, meanwhile, is either embraced uncritically or dismissed out of prejudice; it’s rarely discussed without bias.

Whether it’s artificial intelligence or animal intelligence, few people agree on what they’re talking about, let alone how to compare either to human intelligence.

Tech companies and their evangelists are falling over each other to flog products that claim to use artificial intelligence. If you’ve used a Google Home device, you’ll know that they frequently fail to comprehend anything more than very simple questions or instructions.

Animal intelligence is becoming a mainstream field in psychology and the basis for a legal definition of animal rights, but it’s difficult to pin down even in primates, which share our body plan and environment, let alone in marine mammals.

As a science fiction writer, I’ve long wondered if there’s a way to create a single measure that would span living and artificial intelligence, both now and in the future.

The trouble with Turing

The problem in both fields is that intelligence is poorly defined and often redefined to suit the situation.

Techies wave the Artificial Intelligence flag for very narrow demonstrations of expertise, such as natural language recognition or beating human experts in a new gaming environment.

Tool use and learning: just two aspects of intelligence (University of Georgia)

AI researchers, unlike the tech companies, are usually more cautious in their definitions and claims of intelligence.

The Turing Test is often considered a gold standard for determining intelligence: the AI must convince a human that it is also human through a blind conversation. For all the talk of a “Turing-compliant AI”, Alan Turing’s thought experiment has its limits. Imagine putting a disembodied Donald Trump or Kellyanne Conway to the test. Would you consider their word-salad ramblings evidence of intelligence?

Animal intelligence studies point to tool use here, language there and social behaviours in another place. There’s nothing that lets us usefully compare them to other kinds of intelligence, whether human, animal or machine.

I’d like to propose a multi-dimensional measure of comparative sentience that can be condensed into a useful measure of comparison in the same way that IQ has come to represent analytical intelligence. A successful tool will naturally be misconstrued, misapplied, demonised and abused as much as that scale, while still proving useful to those who understand it.

Here’s a basic map of my idea:

A mindmap of comparative intelligence.
How would you create a comparison of sentience in different creatures?

What are the components of Comparative Intelligence?

This is far from a purely philosophical question: if you’re going to measure something, you need to decide what it is. Depending on your definitions, our faculties break down into Sentience, Intellect, Sapience, Consciousness, Emotience and Language.

Sentience is the ability to sense the world around you. Intellect is how you process and use that information, including creativity. Sapience is the wisdom accrued from your experiences. Consciousness is the ability to be aware of oneself and others, and to experience sensations such as pleasure and pain. Emotience (sometimes referred to as EQ) is how you understand and process your emotions, and those of others around you. Finally, Language measures your ability to communicate your knowledge and experience to other intelligences.
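To make this concrete, here’s a minimal sketch of how the six dimensions might be recorded and condensed into a single comparative score. Everything in it is an assumption for illustration: each axis is normalised so an average human scores 1.0, and the geometric mean is just one candidate for the condensing step (it has the convenient property that a zero on any axis zeroes the whole score).

```python
from dataclasses import dataclass, astuple
from math import prod

@dataclass
class ComparativeIntelligence:
    # Each axis is normalised so that an average human scores 1.0.
    sentience: float      # sensing the world
    intellect: float      # processing information, creativity
    sapience: float       # wisdom accrued from experience
    consciousness: float  # awareness of self and others
    emotience: float      # understanding emotions (EQ)
    language: float       # communicating knowledge and experience

    def score(self) -> float:
        """Condense the six axes into one number via a geometric mean."""
        values = astuple(self)
        return prod(values) ** (1 / len(values))

# Hypothetical profiles, purely for illustration:
human = ComparativeIntelligence(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
chatbot = ComparativeIntelligence(0.1, 0.9, 0.2, 0.0, 0.1, 1.1)

print(human.score())    # 1.0 by construction
print(chatbot.score())  # 0.0: a zero on consciousness zeroes the score
```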

Each dimension embodies big questions about the human experience, but if you’re going to use them comparatively, there’s a further trap for the naive: anthropomorphising non-human subjects. It’s an accusation often levelled at primate researchers who have been in the field too long, and something almost every pet owner does without thinking. The anthropic trap denies non-human intelligence its own context and renders any comparison meaningless.

How can you use Comparative Intelligence?

The immediate use is to give the field of animal psychology, or comparative psychology, a framework to organise itself around. Where do apes, whales, dogs or squid sit in comparison to ourselves, and what does that mean for their rights in a world dominated by humans?

The scale could be useful to lawyers, judges and lawmakers defining not only the rights of animals, but the limits of responsibility for so-called ‘smart’ systems and those who make them. It might help to redefine philosophical traps such as the Trolley Problem, which have been made real by the unregulated arrival of self-driving cars.

Beyond that, AI researchers could aim to create a 0.5 sentience, then maybe a 1.0 that’s the baseline for Turing compliance, and finally a 1.5 that can take over from our own leaders. Given the current standards of human leadership, a 0.75 would probably outclass most of the field.

As a writer, I’m working on a science fiction novel that asks questions about the rights of artificial intelligences. It seems all too likely that humans will cynically create human-equivalent AI that is unable to question its loyalty to humans. My In Machina series of novels investigates the inevitable failure of such a system and has fun along the way, dodging at least a few of the tropes that dominate AI fiction.

What makes a good intelligence scale?

Bad scales deliver bad results. You only have to look at British and American politics to see the absurd knots people tie themselves in by attempting to condense a two-dimensional scale into one dimension. Social liberals and Marxists are forced to share one end, while authoritarians and economic liberals must share the other.

The IQ scale condenses a range of analytical skills into a single measure. However, it notably fails to assess creative skills and has frequently been accused of a bias towards Western culture. It’s a starting point, nonetheless. The EQ scale has also been employed to measure emotional intelligence, and is widely accepted.

Ideally, we would develop a scale where an average human sits at 1.0 or 100. Contemporary AIs would fall below that mean, along with most primates and birds, but we might discover that marine mammals spend a great deal of time talking about how stupid humans are.

Although this is an anthropocentric view, the scale must have the capacity for humanity to be surpassed. While it’s most likely that this would be by an intelligence of our own creation*, it could be an extraterrestrial intelligence. We could be forced to reclassify ourselves when Antarctica melts and reveals the glittering lost city of the dinosaurs (this is a joke — it was obviously built by the Great Old Ones).

What’s in the new scale of comparative intelligence?

Another challenge for this project is that measurable features of intelligence might not map discretely onto the components of comparative intelligence.

Environmental Awareness is a basic category, but we have to be aware that most animals experience the world through senses different from our own, whether they’re sharper versions of ours or they detect features like electromagnetism or heat that we cannot perceive. The same will almost certainly be true of AIs built into forms that can operate in environments for which humans are poorly suited, from the deep oceans to the depths of space.

Communication is also fundamental, and should encompass vocal and non-vocal channels. At what point does communication become language? If we’re going to include communication via light and chemical signals, we’ll need to compare the different channels through abstract measures like information density and complexity.
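If comparing channels that abstractly sounds hand-wavy, here’s one rough illustration of the idea. Shannon entropy gives a channel-agnostic, bits-per-signal proxy for information density; the signal streams below are invented for the example, not real data.

```python
from collections import Counter
from math import log2

def entropy_bits_per_signal(signals):
    """Shannon entropy of a stream of discrete signals: a crude,
    channel-agnostic proxy for information density."""
    counts = Counter(signals)
    total = len(signals)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Hypothetical streams: whale call units versus a firefly's on/off flashes.
whale_calls = ["up", "down", "click", "up", "moan", "click", "down", "up"]
firefly_flashes = ["on", "off"] * 20

print(entropy_bits_per_signal(whale_calls))      # ~1.9 bits: richer alphabet
print(entropy_bits_per_signal(firefly_flashes))  # 1.0 bit: binary signal
```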

Problem-solving is another key category. It unlocks questions about intentionality and manipulating the environment through the use and manufacture of tools, as well as social manipulation.

Self-awareness and theory of mind are the fourth building block. They’re areas that have been studied extensively in child development and are seeing exciting progress in primatology, along with the empathy that a theory of mind engenders in humans.

Culture — the transmission of knowledge between individuals and across generations — has been demonstrated in some primates, and I’d argue that it is vital to any measure of sentience. A subset of culture could be social organisation: social structures are genetically defined in ants and bees, but in humans and some primates they’re driven by environmental and cultural forces. Useful measures might include the richness, durability, breadth and longevity of cultural communication. For example, a one-to-one demonstration of tool use would score lower than a video recorded in multiple languages, broadcast to billions and stored in a format that’s accessible across several generations.
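That worked example can be turned into numbers. Here’s a deliberately naive scoring sketch; the function, its weighting and the inputs are all assumptions for illustration, not a proposed standard.

```python
from math import log10

def cultural_reach(audience: int, generations: int, channels: int) -> float:
    """Purely illustrative score for a cultural transmission event:
    log-scaled breadth (audience size) times longevity (generations the
    record survives) times richness (number of channels or languages)."""
    return log10(audience + 1) * generations * channels

# A one-to-one tool-use demonstration, lost when the observer dies:
print(cultural_reach(audience=1, generations=1, channels=1))  # ~0.3

# A subtitled video broadcast to billions and archived for decades:
print(cultural_reach(audience=2_000_000_000, generations=3, channels=20))  # ~558
```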

Where’s the data?

Neuroscience can identify similar structures in human and non-human brains, but that doesn’t guarantee that they will do the same thing or prove a level of sentience. For any comparative measure to apply to artificial or extraterrestrial intelligences, it has to be a scale where the sentient creature is a black box. It doesn’t matter what’s in there: the only evidence is how it responds to the test.

There’s a wealth of information about animal intelligence and an abundance of data about human intelligence. By far the greatest challenge in this endeavour is to create standardised tests which attempt to measure the same thing in creatures with very different contexts and abilities to interact with their environment.

Developmental studies in human infants show that it is possible to test intelligence without a shared language. Cognitive psychologists have been testing animals for many years, and there are plenty of IQ-type measures that can be applied comparatively.

Emotience is more challenging to measure across species. By their nature, emotions are subjective states which we communicate through language.

Social and cultural aspects of sentience may have to rely on an observational approach. This adds our biases to the problem, but these behaviours would be disrupted by any environment imposed by humans.

Where next?

I wrote the original version of this post in 2017, when I was pondering a master’s degree in primatology. In the end, I went for an MA in creative writing.

Since then, it’s informed my creative writing and the way I think about intelligence. The notion of a measure for comparative intelligence was inspired by another writer, the late, great Iain M. Banks. There’s a short story (I can’t remember the name) in which a 0.5-level space suit continues to carry its human passenger to his destination, long after he has died.

If only I had a 2.0-level intelligence, I’d love to pursue a second master’s alongside writing novels. My 0.9-level mind will just have to settle for one challenging task at a time.

* If you believe in a supernatural creator, this raises the question: what if God were less sentient than man? It would explain why the world is full of deadly animals, diseases and natural disasters. Maybe I’ll just start the Church of God the Idiot. It would probably make me rich.

