
True nature of consciousness: Solving the biggest mystery of your mind

Far from being a mystical “ghost in the machine”, consciousness evolved as a practical mental tool and we could engineer it in a robot using these simple guidelines

By Michael Graziano

18 September 2019


Two years after opening our bureau in New York, we are delighted to share that New Scientist is launching a new live event series in the US. This kicks off on 22 June in New York with a one-day masterclass on the science of the brain and human consciousness. To celebrate, we have unlocked access to five in-depth features exploring mysteries of the human mind.

CONSCIOUSNESS is a slippery concept. It isn’t just the stuff in your head. It is the subjective experience of some of that stuff. When you stub your toe, your brain doesn’t merely process information and trigger a reaction: you have a feeling of pain. When you are happy, you experience joy. The ethereal nature of experience is the mystery at the heart of consciousness. How does the brain, a physical object, generate a non-physical essence?

This experience-ness explains why pinning down consciousness has been described as “the hard problem”. Subjective experience doesn’t exist in any physical dimension. You can’t push it and measure a reaction force, scratch it and measure its hardness or put it on a scale and measure its weight. Philosophers have described it as the “ghost in the machine”. Even scientific ideas about consciousness often have an aura of the metaphysical. Many scientists describe it as an illusion, while others see it as so fundamental that it doesn’t have an explanation. Always at the centre of the riddle lies its non-physicality.

But what if consciousness isn’t so mystical after all? Perhaps we have just been asking the wrong question. Instead of trying to grapple with the hard problem, my colleagues and I at Princeton University take a more down-to-earth approach. My background lies in the neuroscience of movement control, what you could call the robotics of the brain. Drawing on that, I suggest that consciousness can be understood best from an engineering perspective. Far from being some sort of magical property, it is a tool of extraordinary power. It is a tool that can be engineered into machines. Our new approach shows how.

Because the normal methods of observation and measurement don’t quite apply, the study of consciousness has always sat uneasily in mainstream science. A few decades ago, The International Dictionary of Psychology described consciousness as “a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does or why it evolved. Nothing worth reading has ever been written about it.” Since then, consciousness has become an increasingly popular topic in science, generating numerous ideas and thousands of papers but very little agreement.


One approach searches for the neural correlates of consciousness: the minimal physical signature in the brain needed for subjective experience. There have been some interesting leads, but the hunt continues. Other researchers build on the insight that consciousness isn’t just a stimulus processed in the brain. Their higher-order thought theory proposes that the brain contains a system that re-represents the stimulus at a higher level with added self-information, which is how we become conscious of it. Exactly what that higher-order information is, what cognitive purpose it serves and where in the brain it is constructed are all debated – although some people associate it with the prefrontal cortex.

“Far from being some sort of magical property, consciousness is a tool of great, practical power”

A particularly influential idea is known as global workspace theory. Here, information coming both from outside and within the brain competes for attention. Information that wins this competition becomes globally accessible by systems throughout the brain so that we become aware of it and are able to process it deeply. Also popular is the integrated information theory. It sees consciousness as an emergent property of complex systems and posits that the amount of consciousness in any system can be measured in units called phi. Phi is high in the human brain, but also present in everything from a hamburger to the universe, since everything contains at least some integrated information.


Then there is the idea that consciousness is an illusion. This is often misinterpreted. It doesn’t mean that consciousness doesn’t exist, or that we are fooled into thinking we have it. Instead, it likens consciousness to the illusion created for the user of a human-computer interface and argues that the metaphysical properties we attribute to ourselves are wrong. Researchers debate the exact source of these mistaken self-descriptions and the reason we seem to be mentally captive to them.

Mind control

Engineering, and the science of robotics in particular, tells us that every good control device needs a model – a quick sketch – of the thing it is controlling. We already know from cognitive neuroscience that the brain constructs many internal models – bundles of information that represent items in the real world. These models are simplified descriptions, useful but not entirely accurate. For example, the brain has a model of the body – called the body schema – to help control movement of the limbs. When someone loses an arm, the model of the arm can linger on in the brain so that people report feeling a ghostly, phantom limb. But the truth is, all of us have phantom limbs, because we all have internal models of our real limbs that merely become more obvious if the real limb is gone.

By the same engineering logic, the brain needs to model many aspects of itself to be able to monitor and control itself. It needs a kind of phantom brain. One part of this self-model may be particularly important for consciousness. Here’s why. Too much information flows through the brain at any moment for it all to be processed in equal depth. To handle that problem, the system evolved a way to focus its resources and shift that focus strategically from object to object: from a nearby object to a distant sound, or to an internal event such as an emotion or memory. Attention is the main way the brain seizes on information and processes it deeply. To control its roving attention, the brain needs a model, which I call the attention schema.
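
The control logic described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not an actual model from the Princeton group: the class and function names, the salience scores and the hysteresis margin of 0.2 are all invented for the example. The key idea it captures is that the controller steers attention using only a coarse self-model, not the underlying signal detail.

```python
# Minimal sketch (hypothetical names and numbers): a controller steering
# attention using a simplified internal model of that attention -
# an "attention schema".

class AttentionSchema:
    """Simplified self-model: tracks roughly where attention is and how
    strongly it is engaged, with none of the detail of the underlying
    signals ("neurons and synapses")."""
    def __init__(self):
        self.focus = None      # which item attention is currently on
        self.strength = 0.0    # coarse estimate of how engaged it is

    def update(self, focus, strength):
        self.focus = focus
        self.strength = round(strength, 1)  # deliberately low-resolution

def steer_attention(schema, salience):
    """Shift attention to the most salient item, unless the schema reports
    that the current focus is still strongly engaged (0.2 is an arbitrary
    hysteresis margin for the sketch)."""
    best = max(salience, key=salience.get)
    if schema.focus is None or salience[best] > schema.strength + 0.2:
        schema.update(best, salience[best])
    return schema.focus
```

Fed a salience map such as `{"apple": 0.9, "sound": 0.4}`, the controller locks onto the apple and ignores weaker distractors until something markedly more salient appears, which is the strategic, roving focus the theory describes.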

Our attention schema theory explains why people think there is a hard problem of consciousness at all. Efficiency requires the quickest and dirtiest model possible, so the attention schema leaves aside all the little details of signals and neurons and synapses. Instead, the brain describes a simplified version of itself, then reports this as a ghostly, non-physical essence, a magical ability to mentally possess items. Introspection – or cognition accessing internal information – can never return any other answer. It is like a machine stuck in a logic loop. The attention schema is like a self-reflecting mirror: it is the brain’s representation of how the brain represents things, and is a specific example of higher-order thought. In this account, consciousness isn’t so much an illusion as a self-caricature.

“In this account, consciousness isn’t so much an illusion as a self-caricature”

A major advantage of this idea is that it gives a simple reason, straight from control engineering, for why the trait of consciousness would evolve in the first place. Without the ability to monitor and regulate your attention, you would be unable to control your actions in the world. That makes the attention schema essential for survival. Consciousness, in this view, isn’t just smoke and mirrors, but a crucial piece of the engine. It probably co-evolved with the ability to focus attention, just as the arm schema co-evolved with the arm. If so, it would have originated as early as half a billion years ago.

The big challenge will be giving a robot human-like sensory and emotional input

Sometimes, the best way to understand a thing is to try to build it. According to this new idea, we should be able to engineer human-like consciousness into a machine. It would require just four ingredients: artificial attention, a model of that attention, the right range of content (information about things like senses and emotions) and a sophisticated search engine to access the internal models and talk about them.

The first component, attention, is one of the most basic processes in most nervous systems. It is nicely described by the global workspace theory. If you look at an object such as an apple, the brain signals related to the apple may grow in strength and consistency. With sufficient attentional enhancement, these signals can reach a threshold where they achieve “ignition” and enter the global workspace. The visual information about the apple becomes available for systems around the brain, such as speech systems that allow you to talk about the apple, motor systems that allow you to reach for it, cognitive systems that allow you to make high-level decisions about it, and memory systems that allow you to store that moment for possible later use.
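
The ignition-and-broadcast mechanism can be sketched as a toy computation. Everything here is illustrative: the threshold of 1.0, the attentional boost of 0.6 and the signal strengths are invented numbers, and real global-workspace models are far richer. The sketch just shows the two steps the theory names: competition under attentional enhancement, then global broadcast of whatever crosses threshold.

```python
# Toy sketch of global-workspace "ignition" (threshold, boost and signal
# strengths are invented for illustration): signals compete, attention
# enhances one, and anything above threshold is broadcast brain-wide.

IGNITION_THRESHOLD = 1.0

def global_workspace(signals, attended, boost=0.6):
    """signals: dict mapping item -> baseline signal strength.
    The attended item receives an attentional boost; any item whose
    boosted strength reaches the threshold 'ignites' and enters the
    workspace."""
    ignited = {}
    for item, strength in signals.items():
        if item == attended:
            strength += boost
        if strength >= IGNITION_THRESHOLD:
            ignited[item] = strength
    return ignited

def broadcast(ignited, subsystems):
    """Make ignited content globally available: every subsystem (speech,
    motor, memory, ...) receives the same workspace contents."""
    return {name: dict(ignited) for name in subsystems}
```

With attention on the apple, only the apple ignites; the broadcast step then hands the same content to the speech, motor and memory subsystems, which is what makes it reportable and usable for decisions.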

Scientists have already built artificial versions of attention, including at least a simple version of the global workspace. But these machines show no indication of consciousness.

The second component that our conscious machine requires is an attention schema, the crucial internal model that describes attention in a general way, and in so doing informs the machine about consciousness. It depicts attention as an invisible property, a mind that can experience or take possession of items, something that in itself has no physical substance but still lurks privately inside an agent. Build that kind of attention schema, and you will have a machine that claims to be conscious in the same ways that people do.
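
The claim in that last sentence can be made concrete with a short, purely illustrative sketch (the function names and report fields are invented, not part of any published model). The point it demonstrates: if introspection can only read the simplified schema, and the schema omits all physical detail, then the machine's sincere self-report will describe a non-physical experience.

```python
# Illustrative sketch (hypothetical names): introspection can only read
# the simplified self-model, so the machine reports a ghostly "experience"
# rather than the signal processing actually taking place.

def attention_schema_report(item):
    """The schema leaves out the physical machinery (signals, circuits),
    describing attention instead as a non-physical grasp of the item."""
    return {"target": item, "nature": "non-physical", "i_experience_it": True}

def introspect(item):
    """Cognition querying the self-model can return only what the model
    contains - so the machine sincerely claims subjective experience."""
    report = attention_schema_report(item)
    if report["i_experience_it"]:
        return f"I have a subjective experience of the {item}."
```

Asked about an apple, the machine answers that it experiences the apple, and no amount of further introspection can retrieve a different answer, because the physical details were never in the model to begin with.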

The third component our machine needs is the vast stream of material that we associate with consciousness. Ironically, the hard problem – getting the machine to be conscious at all – may be the easy part, and giving the machine the range of material of which to be conscious may be the hard part. Efforts to build conscious content might begin with sensory input, especially vision, because so much is known about how sensory systems work in the brain and how they interact with attention. But a rich sensory consciousness on its own won’t be enough. Our machine should also be able to incorporate internal items such as abstract thought and emotion. Here the engineering problem becomes really tricky. Little is known about the information content in the brain that lies behind abstract thought and emotion, or how they intersect with the mechanisms of attention. Sorting out how to build a machine with that content could take decades.

Talking my language

The final component our conscious machine requires is a talking search engine. Strictly speaking, talking isn’t necessary for consciousness, but for most people the goal of artificial consciousness is a machine that has a human-like ability to speak and understand. We want to have a good conversation with it.

The problem is harder than it looks. We already have digital assistants like Siri and Alexa, but these are limited in their functions. You give them words, they search for words on the internet, and they give you back more words. If you ask for the nearest restaurant, the digital assistant doesn’t know what a restaurant is, other than as a statistical clustering of words. In contrast, the human brain can translate speech into non-verbal information and back again. If someone asks you how the taste of a lemon compares with that of an orange, you translate the speech into taste information and compare the two remembered tastes, then translate back into words to give your answer. This easy back-and-forth conversion between speech and many other information domains is challenging to do artificially. Our conscious machine would need to correlate information across every imaginable domain, a problem that hasn’t yet been solved in artificial intelligence.

“To engineer human-like consciousness into a machine would require four ingredients”

Given all the promise and all the difficulties, just how close are we to conscious machines?

If the attention schema approach is correct, the first attempts at visual consciousness could be built with existing technology. It will take far longer, though, to give machines a human-like stream of consciousness: a conscious machine capable of seeing, hearing, tasting, touching, thinking abstract thoughts and feeling emotions, with a single integrated focus of attention to coordinate within and between all those domains, and able to talk about that full range of content. But I believe it will happen.

To me, though, the purpose of this thought experiment isn’t to advocate for conscious robots. The point is that consciousness itself can be understood. It isn’t an ethereal essence or an inexplicable mystery. The attention schema theory puts it in context and gives it a concrete role in adaptation and survival. Instead of an ill-defined epiphenomenon, a fog extruded by the brain and floating between the ears, consciousness becomes a crucial component of the cognitive machine.

