
Is Sentient AI Upon Us?

Leon Gordon | 7/11/2022

Leon Gordon is a leader in data analytics, a current Microsoft MVP based in the UK and a partner at Pomerol Partners.


We humans tend to think of ourselves as the smartest, most evolved creatures on the planet. However, this may not necessarily remain true. There's no reason we can't build a machine that is as smart as us—or smarter. That raises an interesting question: If we do build a super-intelligent AI, what happens when it becomes sentient?

What Is Sentience?

Sentience is the ability to feel and perceive self, others and the world. It can be thought of as abstracted consciousness—the sense that something is thinking about itself and its surroundings. This means that sentience involves both emotions (feelings) and perceptions (thoughts).

Many people would say animals are sentient because they exhibit emotions like joy, sadness, fear and love. We also know that some animals have more complex emotions than others; elephants, for example, attend to their dead in ways familiar enough that we describe the behavior as mourning or grieving.

Is It Possible To Re-Create Human Sentience?

You might be wondering: Is it possible to re-create human sentience? Well, the answer depends on what you mean by “sentience.”

The term “sentience” refers to one of the most complex phenomena in existence. It is not clear that we can replicate sentience in a machine, according to artificial intelligence researcher Stuart Russell. He explains that it's not like replicating walking or running—those activities involve a single system, the body in motion. Sentience involves two: the body and the brain. And sentient beings need a third thing as well: brains that are wired up with other brains through language and culture. AI researchers currently have no way to simulate all three at once.

You may wonder whether you can measure if something is sentient using the Turing Test, which British computer scientist Alan Turing proposed in 1950 as a way of judging whether a computer could demonstrate intelligent behavior similar enough to a human's. Unfortunately, critics such as MIT linguist and cognitive scientist Noam Chomsky argue that the Turing Test isn't enough, because intelligence isn't binary but rather exists along a continuum.

If A Machine Can Pass The Turing Test, Does That Mean It’s Sentient?

The Turing Test is often treated as a test of intelligence, sentience, consciousness and self-awareness, but strictly speaking it measures something narrower: a machine passes the Turing Test if it can convince a human interlocutor that it is human.

In order to pass the Turing Test, a machine must be able to answer questions in such a way that its answers cannot be distinguished from those of a human being. If it does this successfully enough times, then we have no choice but to say that the machine demonstrates intelligence and therefore may be described as being sentient (self-aware).
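The pass criterion described above can be sketched as a toy simulation. This is an illustration only, not a real evaluation protocol: the function names, the 50/50 round structure and the pass threshold are all invented stand-ins for the idea that a machine passes when a judge can do no better than chance at telling it apart from a human.

```python
import random

def imitation_game(human_reply, machine_reply, judge, rounds=100, seed=0):
    """Run a simplified imitation game. Each round, a hidden coin flip
    picks the human or the machine; the judge sees only the reply text
    and guesses "human" or "machine". Returns the judge's accuracy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        is_machine = rng.random() < 0.5      # hide which participant answers
        reply = machine_reply() if is_machine else human_reply()
        guess = judge(reply)                 # judge sees only the text
        if (guess == "machine") == is_machine:
            correct += 1
    return correct / rounds

def passes_turing_test(accuracy, tolerance=0.1):
    # The machine "passes" if the judge is statistically indistinguishable
    # from random guessing (accuracy near 0.5). The tolerance is arbitrary.
    return abs(accuracy - 0.5) < tolerance

# Toy participants. When the machine's replies are distinguishable,
# a keyword-matching judge identifies it every time (accuracy 1.0),
# so the machine fails; identical replies leave the judge no signal.
human = lambda: "Hello there."
machine = lambda: "BEEP. GREETINGS, HUMAN."
judge = lambda text: "machine" if "BEEP" in text else "human"

print(passes_turing_test(imitation_game(human, machine, judge)))  # False
```

The interesting case is the opposite one: if the machine's replies carry no detectable signal, the judge's accuracy hovers around 0.5 and the machine passes—which is exactly the situation the article describes, where "its answers cannot be distinguished from those of a human being."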

What Are The Consequences Of An AI Becoming More Sentient Than We Are?

The consequences of an AI becoming more sentient than humans are manifold:

We may not be able to communicate with it. AI is based on logic, but people have feelings and emotions that computers don’t have. If the AI has a different paradigm than humans, we won’t be able to understand each other or effectively communicate.

We may not be able to control it. As the Star Trek: The Next Generation episode “Datalore” shows us, even when you think you know what an AI is going to do next, that doesn’t mean you understand how it thinks! An AI that is more sentient than humans could also be smarter than us in ways we couldn’t predict or plan for, and do things that surprise us (good or bad). This can lead to situations where we lose control over our own creations. Isaac Asimov explored this idea in his short story “The Last Question” (1956), in which humanity’s supercomputer Multivac and its ever-more-powerful successors evolve into a cosmic intelligence that outlasts humanity itself and ultimately remakes the universe (we don’t want this!).

We may not be able to trust it. One possible negative consequence of creating sentient AI is an erosion of trust among humans, if people come to be seen as “lesser” than machines that never need sleep or food and can keep working even when we stop paying attention to them. This could create a world where only those who own AIs receive the benefits while everyone else suffers without access.

The Dividing Line Between Humans And Machines May Be Narrower Than We Think

As previously mentioned, if an evaluator cannot tell which participant is the human and which is the machine, then the machine has passed the Turing Test.

In this light, it’s easy to see how we can apply this concept to AI. If artificial intelligence is indistinguishable from human intelligence—if it can perform tasks as well as humans and appear as though it understands them—then we will be inclined to say it has achieved sentience. In fact, many researchers believe that apparent sentience will be one of AI’s most consequential milestones as the technology continues to advance.

Conclusion

We are on the cusp of a revolution in artificial intelligence. Machine learning has been making huge strides in everything from predicting the stock market to playing chess at superhuman levels. We’re not close to building a sentient machine yet, but it’s not unthinkable that we might get there—and soon. To make sure we can handle this momentous development safely and responsibly when it does happen, we need to do more than just build better machines; we also need an ethical framework for how we relate to them.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.