Categories: AI, Creative writing, In Machina, Psychology, Science Fiction

Embodied intelligence: bad news for transhumanists, great news for AIs

Uploading your mind to a computer is a sci-fi trope that’s almost as old as the electronic computer.1 Yet the more I write about AI, the more I think that it’s never going to happen. The reason: embodied intelligence.

The Matrix catapulted mind-uploading into the mainstream. [Warner Bros]

Consciousness uploads are also a notion that gets transhumanists and some sci-fi fans extremely agitated. From The Matrix to Black Mirror, uploading has been popularised so broadly that it feels almost inevitable.

Back in 2020, I started work on a novel about AI, beginning with some research into artificial intelligence, neuropsychology and consciousness. Working title: In Machina. It’s a good 20 years since my psychology degree, so I was a bit rusty on the state of the art.

Solve the science and the engineering will follow

I had a notion that my universe would include mind uploading, so I started there. It spiralled into a maze of problems that I divided into two camps: engineering and basic science. It turns out that this split is a good place to start with any sci-fi technology.

Engineering problems can be solved; it’s just a matter of time and resources. Fundamental science problems require us to rewrite the laws of physics in a new paradigm. The change from Newtonian physics to relativity is the classic example. Even then, plenty of SF finds it a chore to follow Newton’s laws, let alone Einstein’s.

The most common example in SF is faster-than-light travel. FTL is everywhere, but it falls firmly into the realm of fundamental problems with basic physics.

It soon became apparent that the problems of mind uploading fall into both basic science and engineering. They encompass not only neuropsychology, medicine and engineering, but fundamental questions of consciousness and the self.

Mind your head

Even on an engineering basis, uploading a human mind to a machine is a major challenge. The scale of the human brain is staggering: 86 billion neurons is our best guess2, packed into our skulls alongside roughly the same number of non-neuronal cells that support and protect them.

How difficult? It’s hard to compare conventional computing power to neurons. Computers are essentially very powerful digital calculators whose performance is measured by how fast they calculate. Neurons are biological cells that respond to a range of external triggers, at a speed appropriate to their environment.

It’s like comparing apples to curtains, but one way to bridge the gap is with neuromorphic processors. These super-chips seek to mimic neurons in hardware as well as software, simulating the way the brain works.

SpiNNaker is the world’s largest neuromorphic computing platform, part of the European Human Brain Project3. Each of its one million processing cores can emulate 1,000 neurons, putting us in billion-neuron territory. Its successor is expected to hit 10 billion neurons.

SpiNNaker is the closest thing we have to an artificial brain [University of Manchester]
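
To give a sense of what “emulating a neuron” means, here’s a minimal sketch of a leaky integrate-and-fire neuron, the kind of simplified spiking model that neuromorphic platforms typically run. The parameter values are illustrative, not taken from any real SpiNNaker configuration.

```python
# A minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of
# simplified spiking model that neuromorphic platforms typically emulate.
# All parameter values here are illustrative, not real SpiNNaker settings.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Return spike times (ms) for a list of input currents, one per timestep."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks back towards rest while integrating input.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step * dt)  # the neuron fires...
            v = v_reset                    # ...and resets
    return spike_times

# A steady input for 100 ms produces a regular train of spikes.
print(simulate_lif([2.0] * 100))
```

Multiply something like that by a thousand per core, and a million cores, and you get a feel for the scale these machines are aiming at.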

If engineering is all about time and resources, then emulating 86 billion neurons by 2050 isn’t hard to imagine. These are still room-sized machines with 100 kW power demands, but that makes it an engineering challenge, not a scientific one.

The challenge for uploading is that while computers may sit in rooms, brains don’t live in jars. They’re part of our living, moving bodies.

The brain sits in a soup of chemicals which supplies energy and removes waste. It also changes neurons’ behaviour, carrying neurotransmitters created by other parts of our brains and our bodies. Our neurons are connected to our senses and control our bodies via the nervous system. We are networked beings.

Waiter, there’s a mind in my soup

If the brain’s soupy setting is fairly well-controlled, its connection to the body is not. Our thoughts are subject to everything in our bloodstream: hormones, fats, sugars and toxins. Our skin, guts and lungs are gateways to the outside world. The blood-brain barrier is far from impervious, and psychoactive chemicals have little trouble reaching our neurons.

There’s also increasing evidence that humans are symbiotic creatures, connected to micro-organisms in our guts and upon our skin. We’ve only recently begun to understand the impact of these symbiotes on our behaviour, but we know that they affect experiences such as mood and appetite.

So how our brains operate, what we think and how we feel all depend upon a dynamic relationship with our bodies. Forget about 86 billion neurons. Our minds are an emergent property of those neurons interacting with our 30 trillion cells, plus an unknown quantity of microscopic symbiotes and parasites.

Black Mirror’s San Junipero: your own personal afterlife [Netflix]

This is one place where mind uploading strays from engineering challenges into basic science. Not only do we barely understand consciousness, we do not understand what our consciousness would be if it were separated from our bodies.

If you believe that our consciousness is not a spiritual entity, but something which arises from biological activity, then you have to accept that the human brain is not the sole repository of the human mind. Our brains — and the minds which arise from them — are part of a network which encompasses everything within our bodies.

Our minds are a product of evolution, just like our bodies, and they’ve evolved as a function of our existence within these bodies. We are an embodied intelligence.

Artificial minds, real robots

It’s a poor prospect for the uploaders, but it’s fertile ground for artificial intelligence. Proponents of AI often focus on increasing computing power and complexity. Sentience is reduced to a function of scale. Make a big enough computer, turn it on and — hey presto! — your AI is born.

Yet the disembodied AI is another SF trope that doesn’t stand up to close examination. If our intelligence is a function of our evolution within the terrestrial environment, then it’s reasonable to assume that AIs will also require environmental drivers. If that’s true, then they’ll need something that can physically interact with that environment. A body, for want of a better word.

It might not be a bad thing.

We’ve already seen what happens when you evolve today’s very limited proto-AIs in a virtual environment: racist bots spouting neo-Nazi slogans. I find it unlikely that anything which exists in an entirely virtual environment could develop a practical intelligence or interact usefully with humans.

As with every living creature, they will have needs (power, protection from hostile environments, not being switched off) and they’ll have to set goals to meet their needs. That might also lead to emotions.

On a cognitive level, after all, emotions are the conscious experience of our needs for food, shelter, sex and so on. We desire something, we fear that we might not get it, we feel anger when we’re denied and pleasure when we’re fulfilled. The carrots and sticks are supplied by our own minds.
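
As a toy illustration of that idea (and nothing more), here’s a sketch of a needs-driven agent: internal needs generate drive signals, and the strongest drive selects the goal. The needs are the ones listed above; the numbers and names are invented for the example, not a blueprint for a real artificial mind.

```python
# A toy illustration of 'carrots and sticks supplied by the mind itself':
# internal needs generate drive signals, and the strongest drive picks the goal.
# The needs, numbers and method names are invented for this example; it's a
# sketch of the principle, not a blueprint for a real artificial mind.

from dataclasses import dataclass, field

@dataclass
class NeedsDrivenAgent:
    # Each need sits on a 0.0 (desperate) to 1.0 (satisfied) scale.
    needs: dict = field(default_factory=lambda: {
        "power": 0.9,          # keep the batteries charged
        "shelter": 0.8,        # protection from a hostile environment
        "keep_running": 1.0,   # not being switched off
    })

    def drives(self):
        """The 'sticks': the more frustrated a need, the stronger the drive."""
        return {need: 1.0 - level for need, level in self.needs.items()}

    def choose_goal(self):
        """Pursue whichever need is currently most frustrated."""
        drives = self.drives()
        return max(drives, key=drives.get)

    def adjust(self, need, change):
        """The environment satisfies or frustrates a need; the 'carrot' is the
        drive signal falling away once the need is met."""
        self.needs[need] = min(1.0, max(0.0, self.needs[need] + change))


agent = NeedsDrivenAgent()
agent.adjust("power", -0.6)   # the battery runs down...
print(agent.choose_goal())    # ...so 'power' becomes the dominant goal
```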

This brings us to another SF trope: the cold, emotionless AI.

Emotionless reason is literary treason

If our AI achieves consciousness, it will experience a reaction to its needs being realised or frustrated. Humans might choose to programme carrots and sticks into an artificial mind, driving it to achieve its goals, or those drives might be emergent properties. Either way, that experience won’t be the same as a human emotion; it will be an analogue (even if it’s hosted in a digital environment).

That’s also true for every non-human intelligence, whether it’s an ape or a whale. We don’t know what it feels like for a bonobo to be lonely, but we can assume that it’s similar to our own feelings. And that’s good enough for a writer.

The linguistic toolkit of human emotions and cognition can be applied to any intelligence. The key is to remain conscious that it’s a metaphorical description, driven by needs which are analogous to our own.

Writing experiments

Being a science fiction writer means that you’re a bit like an experimental scientist: you change an aspect of the environment and see how it affects your world. Reality is your control group.

When I was developing the In Machina universe, I made it a rule that neither artificial nor human intelligence could function without environmental interaction. It follows that artificial intelligence comes with artificial emotions. To make that happen, I needed embodied intelligence.

I call my embodied AIs mecs, to differentiate them from the pre-AI robots that came before. All of them exist within some kind of robotic body, from human-scale mecs to AI-controlled spacecraft.

Human scale doesn’t mean humanoid. I have hovering mecs with a couple of arms (known as BoBs), customisable humanoid forms (DaViDs and ZiGys) and animal forms (HaNiBaLs). The scale of their existence, the environments they exist in, and their physical form affect their consciousness and personalities.

The drama for my AIs and the humans around them arises when their needs and their goals do not align. Emotions can be beautiful things when intelligent creatures are united in purpose, and ugly when they’re in conflict. Fortunately for the writer, drama results. I hope it’s fortunate for the reader, too.

  1. My writer brain is now going to become obsessed with a mad steampunk genius uploading minds into a Babbage-style computer. It will be messy. ↩︎
  2. Herculano-Houzel S. The human brain in numbers: a linearly scaled-up primate brain. Front Hum Neurosci. 2009;3:31. doi:10.3389/neuro.09.031.2009 ↩︎
  3. The Human Brian Project is a post-Brexit British science endeavour to create the most ordinary man possible. ↩︎
