
Comparative sentience: what does it mean to be smart?

Comparing the relative sentience of human and non-human intelligences starts with defining the aspects of sentience, even before you break them down into specific characteristics.

I’ve brought this post over from Medium because it was cited in December 2022 by Maurice Yolles at Liverpool John Moores University for Consciousness, Sapience and Sentience—A Metacybernetic View in the journal Systems.

I previously sketched an overview of imagining a measure of Comparative Sentience, and the first task I identified was a set of semantic challenges.

  • What are sentience, sapience, consciousness, intelligence and emotion?
  • What are the differences and similarities between these aspects?
  • Do we need to measure and compare some or all of them?
  • Is sentience the best overall term?
A chimpanzee with a tool

1. Sentience

The Oxford English Dictionary offers a surprisingly narrow definition of sentience as “the ability to perceive or feel things”, while Merriam-Webster is slightly more helpful with “feeling or sensation as distinguished from perception and thought”.

In the philosophy of animal rights, sentience is a reference to subjective experiences, which crucially includes the capacity to experience pain and pleasure, and animal sentience is recognised in the EU’s 1997 Treaty of Amsterdam.

There is one proposed comparative measure of sentience. The Sentience Quotient was coined by the molecular engineering researcher Robert A. Freitas Jr in 1977, and relates an organism’s total information processing rate (bit/s) to the mass of its brain, derived from the processing rate of each individual unit (such as a neuron), the weight and size of a single unit, and the total number of units. It’s a measure of efficiency which makes humans almost identical to other primates and little different from insects, but it’s of little use here because it says nothing about the quality of sentient experience.
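As a rough sketch, the Sentience Quotient is usually written as SQ = log₁₀(I/M): the base-10 logarithm of processing rate over brain mass. The figures below are illustrative ballpark numbers, not Freitas’s exact estimates:

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    """Freitas's SQ: log10 of information processing rate over brain mass."""
    return math.log10(bits_per_second / mass_kg)

# Ballpark human figures: ~10^10 neurons at ~10^3 bit/s each gives ~10^13 bit/s,
# in a brain of roughly 1.5 kg -- an SQ of about +13.
human_sq = sentience_quotient(1e13, 1.5)
print(round(human_sq))  # about 13
```

Because the measure is logarithmic, wildly different nervous systems land within a few points of each other, which is exactly why it flattens the differences between humans, primates and insects.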

2. Sapience

The OED and M-W dictionaries both describe sapience interchangeably with wisdom and sagacity: “Having or showing experience, knowledge, and good judgement”, although Collins is more expansive: “the ability or result of an ability to think and act utilizing knowledge, experience, understanding, common sense, and insight”.

Arguably, the use of information integrated from different types of experience is the distinctively human feature which justifies the title of Homo sapiens, encompassing other aspects of intelligence such as the culture and communication required to build a body of knowledge, understanding those around us, and understanding how to manipulate the environment.

Not surprisingly, wisdom is highly regarded by religions, either as a facet of godhood of which humans can only achieve a limited portion, or as the path to ultimate enlightenment and godhood.

3. Consciousness

One of the great mysteries of philosophy isn’t going to be settled in a blog post: there is no widely accepted operational definition of consciousness, or how and why it exists, but it is something that everyone agrees people have, and it is probably present in some animals.

The OED calls consciousness “the state of being aware of and responsive to one’s surroundings” and “the fact of awareness by the mind of itself and the world”. Merriam-Webster gives a range of definitions, most usefully “the quality or state of being aware especially of something within oneself”, “the state of being characterized by sensation, emotion, volition, and thought” and “the upper level of mental life of which the person is aware as contrasted with unconscious processes”.

While some of this is a broader version of sentience, the key aspects that make consciousness different from sentience seem to be self-awareness and the contrast with unconscious activity.

Philosophers continue to argue that there are different types and forms of consciousness, which might allow us to form a hierarchy of consciousness if these states can be measured and tested across species. The problem is that consciousness is at least partially subjective, so we would have to accept that any being which communicates an understanding of the concept and claims ownership of it is, therefore, conscious.

In the absence of direct communication, psychologists typically use the mirror test to establish self-awareness, and have found it across a wide range of species to a degree which may even be quantifiable. Behavioural tests can demonstrate both volition and a theory of mind to different degrees across apes, monkeys, and other animals.

4. Intelligence

Intelligence is often defined broadly: the OED says it is “the ability to acquire and apply knowledge and skills” while M-W defines it as “the ability to learn or understand or to deal with new or trying situations” and “the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria”.

While the aspects of knowledge and awareness are similar to sapience, intelligence is distinguished by analysis and the application of knowledge to solve problems, which require volition and planning, and may require innovation too.

5. Emotion

While M-W gives a simple definition as “the affective aspect of consciousness” it helpfully provides a more complex one: “a conscious mental reaction (as anger or fear) subjectively experienced as strong feeling usually directed toward a specific object and typically accompanied by physiological and behavioral changes in the body”. OED is illuminatingly different: “Instinctive or intuitive feeling as distinguished from reasoning or knowledge” or “A strong feeling deriving from one’s circumstances, mood, or relationships with others”.

The crucial aspect for this project is that emotion is a conscious experience, not just a reaction such as retreating from a source of pain, but at the same time, it is separate from sapience or intelligence. It’s worth including in this project because it has a role in learning and volition, and because there’s a large body of work on emotions in humans and animals, although there’s also a great deal of debate about whether non-human animals experience emotional states in the same way as humans, and the role of cognition in human emotional experiences.

Common semantic factors in aspects of sentience.

Conclusion

Looking at the common factors which emerge between the different aspects of sentience, it’s interesting that subjective experience is common to all except intelligence, while sentience shares all of its factors with sapience, consciousness and emotion.

No single aspect emerges as dominant, and intelligence sits apart from emotion (not surprisingly). The most effective approach seems to be to stick with Comparative Sentience.

One reply on “Comparative sentience: what does it mean to be smart?”


If you’re going to use AI for writing or research, you need to know two things. The first is that AI doesn’t know anything.

There are other important things to know about AI. It’s trained on content taken without permission of the creators. It contributes to rising greenhouse emissions and power consumption from technology companies.

But those are ethical concerns that people who choose to use AI will have to put aside for their consciences to deal with. AI’s lack of knowledge is a practical issue that could put you at risk if you publish work that has been created using AI. I’m not talking about moral risks here, but tangible legal and financial dangers.

If you know anything about AI, you’ll roll your eyes and wonder why these things have to be pointed out in 2024. The answer is that AI companies don’t like talking about them, and people who like to take shortcuts don’t look for reasons to avoid them.

AI companies are a lot like populist politicians. They want to sound authoritative, even when they’re spouting complete bullshit. When OpenAI launched ChatGPT-4 Omni, it boasted about the model’s speed and its charming personality, not about its reliability. And then it had to withdraw its flagship voice model because it had imitated Scarlett Johansson without her consent.

This two-line story tells us everything we need to know about the ethics of tech entrepreneurs.

You cannot say that you weren’t warned.

AI doesn’t know anything. It’s a language machine

ChatGPT, Google Gemini, Claude and their ilk are all a type of computing system called a Large Language Model. These LLMs analyse written statements and create patterns of similarity which tell them how to mimic human language[1].

In the simplest terms: word A goes next to word B more often than it goes next to word C.

They do it on a vast scale, creating hugely complex patterns from letters, syllables, words, sentences, essays, short stories, news reports, plays, books, speeches, web pages, online forums and so on.
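The word-adjacency idea can be sketched as a toy bigram counter. This is a deliberately crude illustration, not how a real LLM works — modern models use neural networks over sub-word tokens rather than raw word counts — but it shows the underlying principle of “word A follows word B more often than word C”:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the billions of words an LLM sees.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def most_likely_next(word: str) -> str:
    """Pick the word that most often follows `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" twice, "mat" and "fish" once each
```

Scale the corpus up to most of the written internet and the counting up to billions of learned parameters, and you have the essence of a language machine: statistical association, with no store of facts anywhere in it.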

I don’t know what intelligence is, artificial or organic. Many people argue that language is an essential part of it, but most people would also agree that intelligence also needs a way to store, sort and retrieve information. Knowledge, for want of a better word.

Large language models cannot do this, which will also come as a surprise to a lot of people who use AI tools. There are no facts inside ChatGPT, Claude or Gemini, no encyclopedia for them to check, only associations.

LLMs can build associations with very high probability, so if you ask ChatGPT who wrote 1984, it will tell you “George Orwell” with high confidence, because the correlation between 1984 and George Orwell is very high. If you ask who wrote Hamlet, it will tell you William Shakespeare, but it should tell you that there is some dispute about the authorship of Shakespeare’s plays[2].

But it won’t, unless you ask a very specific question, because companies like OpenAI build their tools to look like they know things. What ChatGPT and the others don’t do, what they cannot do, is read Wikipedia or another source (preferably lots of them) to verify the correlation.

It’s already been fed Wikipedia and broken it down into probabilities, but it’s also been fed a lot of other information. It might have analysed a short story that says Shakespeare was a time-travelling alien. Unless that information is tagged as a creative work, ChatGPT cannot determine fact from fiction[3].

ChatGPT’s response here indicates many of the sources on which it has been trained.

AIs don’t know why they lie. Or when they lie.

It gets worse, again because companies like OpenAI build their tools to look like they know things. When their statistical models don’t provide an answer, they use them to create answers that look and sound real.

This happens so often that it’s become known as a “hallucination”[4].

You can have fun with them, as the author Charles Stross did when he asked Google Bard (now Gemini) for five facts about Charles Stross[5]. Then five more, and so on. The early statements were accurate, then it began to create spurious associations that might make sense for a left-leaning science fiction author. With each round, the accuracy declined and the hallucinations took precedence[6].

Or you can be fined by a judge, as lawyers in New York were last year. They asked ChatGPT to help them research a case and it helpfully provided summaries and links to judgements in other, similar cases. There was just one problem: the cases didn’t exist. The links went nowhere, but the lawyers didn’t check. The judge was not impressed and they not only lost the case but, this being America, the client probably sued as well.

I’ve had numerous occasions when Google Gemini has lied about the reasons it failed to complete a task. Why? Because AI companies build their tools to look like they know things.

And it will kill people. Amazon is already facing a deluge of AI-generated books about subjects like mushroom foraging which contain inaccurate information about poisonous fungi.

If AI doesn’t know anything and it tells lies, what good is it?

AI can be a great way to start a research project, but it cannot be the whole project. Follow the links and check the facts you’re given. Above all, remember that the AI is nothing more than a clever parrot, repeating sentences that it doesn’t understand.

Remember also that these AIs are also Large Language Models. They are very good at placing words next to each other in useful ways, particularly if you instruct them to mimic a particular style. Of course, there are legal and moral questions if you ask an AI to write something entirely on its own and you intend to pass it off as your own. There are legal and ethical questions if you ask it to imitate a particular writer, particularly one who’s working today.

But if you’ve written something and you want an AI to rewrite it in another style, then it becomes a tool to enhance your writing. You might want to sound more formal or archaic, or to bring in the flavour of a dialect. AIs like ChatGPT and Gemini can do that, and they do it very well. I’ll look at this in a future post.

Bing Copilot with DALL-E 3 created this image of “an AI vomiting words”. The AI added an audience of cheering children.

What’s the second thing I need to know before using AI?

For the second thing you need to know if you’re using AI, come back next week.

Just for fun, I asked ChatGPT-4O to “write a 500-word blog post in plain English about the limitations and risks of using LLM AIs as a knowledge-based research tool. Include in layman’s terms an explanation of how LLMs work and the occurrence of AI hallucinations.”

What do you think?

AI images of HAL-9000 vomiting and a robot vomiting words onto cheering children created by Bing Copilot using DALL-E 3. What a future we live in.

1. This might be how humans learn language, but that’s another discussion.
2. I don’t have any skin in this argument. It’s just the first example that came to mind.
3. The answer also reveals some of the works which have been absorbed by ChatGPT.
4. Hallucinations might be the first sign of machine creativity, but that’s very controversial.
5. I can’t do this for myself because I share my name with a very talented American tech innovator. He’s justifiably more famous than I am.
6. I have now created a link which adds statistical veracity to those claims.
