Artificial Intelligence

Can AI Become Conscious? And Why You Should Care

Credit: Getty Images

Last summer, a Google employee was fired after he went public with his theory that Google's language technology, LaMDA, which underpins its ChatGPT rival Bard, is sentient and should have its "wants" respected. But Google CEO Sundar Pichai recently told CBS the employee did not "fully understand" how Bard worked, and Google has maintained LaMDA was doing exactly what it had been programmed to do: communicate in a human-like way.

If Bard or any Artificial Intelligence (AI)-powered application is sentient, then this goes beyond having its "wants" respected. It opens up a whole slew of ethical, moral, legal and safety concerns that have so far been absent from the discussion of AI innovations and investments.

Once we go that route – when we create the first sentient AI-powered application – there is no turning back. We will have created a sentient being, like a baby, and with this baby comes joy but also responsibility, as well as moral, ethical, and other concerns both as a parent and as part of a society.

Since Google dismissed the employee's beliefs, the discussion of AI and consciousness faded away until ChatGPT arrived. The hype around ChatGPT in late 2022 ignited both excitement and nervousness about AI. It is the big buzzword not only in the media, but also in big tech and big banking, and huge amounts of investor money are pouring into AI-related projects.

ChatGPT became an instant viral sensation, the populist "face" of AI, with millions of people trying it out. Trained on vast amounts of text from the internet, it can give written answers to questions in a natural, human-like way. Microsoft, which has invested heavily in OpenAI, says AI can take "the drudgery" out of mundane jobs such as office administration. A recent report by Goldman Sachs suggests AI could replace the equivalent of 300 million full-time jobs. While the AI industry will create new human jobs, they are likely to require new skills.

But it’s not just about whether AI will automate human jobs. Could there be a day when your co-worker is a sentient AI-powered robot?

This concept goes beyond the workplace. There is a new use case evolving: the AI companion. Why people seek an AI companion instead of a human one, and what that implies for our future society, are questions for a different time and a different article. For now, we will focus on whether AI can be conscious and what this means for us.

The creation of artificial life has been the subject of science fiction for decades, while philosophers have long considered the nature of consciousness. Technologists broadly agree that today's AI chatbots, however good they are at mimicking human speech, are not self-aware, but some argue that we may soon have to re-evaluate how we talk about sentience.

A few people, however, have argued that some AI programs as they exist now should be considered sentient. Ilya Sutskever, co-founder of OpenAI, the company behind ChatGPT, has speculated that the algorithms behind his company's creations might be "slightly conscious."

It is interesting that the co-founder of OpenAI is postulating that ChatGPT might be "slightly conscious" without recognizing the implications of this argument. In an interview with ABC, OpenAI's CEO, Sam Altman, expressed his concerns about the potential dangers of advanced AI, saying that despite its "tremendous benefits," he also fears the potentially unprecedented scope of its risks.

"The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we're prepared for," he added. "And that doesn't require superintelligence." 

Altman locates the peril not in ChatGPT itself but in its competitors: "A thing that I do worry about is... we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it." But if ChatGPT is indeed "slightly conscious," then OpenAI may not be innovating as responsibly as Altman claims.

Without innovating responsibly, our investments in these innovations are at stake. After all, innovating responsibly equates to investing responsibly.

In early July, the U.S. Federal Trade Commission launched an investigation into ChatGPT creator OpenAI over whether the artificial intelligence company violated consumer protection laws by scraping public data and publishing false information through its chatbot. If Ilya Sutskever is correct that ChatGPT is conscious, then OpenAI might have violated more than consumer protection laws, and the investigation might need to extend to the ethical and regulatory concerns raised by a sentient application.

How should we make sure we innovate responsibly? How can we verify that OpenAI and others like it understand the implications of their innovations? Before we answer these questions, let's first understand what consciousness is and what the safety, legal, and moral implications of AI gaining sentience would be.

What is consciousness?

For centuries, physicists, philosophers, neuroscientists, theologians, linguists, and all sorts of scientists have debated what makes us conscious and how consciousness can be explained. Needless to say, the jury is still out. Philosophers like David Chalmers, for example, suggest that consciousness cannot be explained by today's science. Understanding it may even require a new physics – perhaps one that includes a different type of stuff from which consciousness is made.

Although precise definitions are hard to come by, intuitively, we all kind of “know” what consciousness is. It is what goes away when we're under general anesthesia or when we fall into a dreamless sleep, and returns when we wake up. When we open our eyes, our brain does not just process visual information. There is another dimension entirely – our minds are filled with light, color, shades, shapes, emotions, thoughts, beliefs, intentions – all of which feel a particular way to us.

People are fascinated by how sophisticated and “intelligent” these AI algorithms are, and how they can become even more “intelligent.” But intelligence and consciousness are different things: intelligence is about doing, while consciousness is about being.

The history of AI has focused on the former and ignored the latter. If a machine ever did exist as a conscious being, how would we ever know? The answer is entangled with some of the biggest mysteries about how our brains and minds work.

A few hundred years ago, the accepted view was that only humans were conscious. Animals, lacking souls, were seen as mindless robots. Today some believe that if we are conscious, there is little reason not to believe that mammals, with their similar brains, are conscious too. And why draw the line at mammals? Birds appear to reflect when they solve puzzles. Many animals, even invertebrates like shrimp and lobsters, show signs of feeling pain, which would suggest they have some degree of subjective experience.

But we don’t know what a bat, a bird, or an octopus may “feel” or “experience,” even though we might agree that they have consciousness. AI machines might someday have consciousness too, and it would probably be different from human consciousness, much as a bat’s experience differs from a human’s.

Even if we cannot be certain whether machines can gain consciousness, or what form such consciousness might take, it is important to understand the implications of that scenario for our future society.

Why does it matter whether machines can gain consciousness? What are the implications?

Whether machines can become sentient matters for ethical, moral, legal and safety reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to us humans. They become an end unto themselves.

As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis, because once something is conscious, we have a responsibility toward its welfare, especially if we created it. To evoke a sci-fi classic: the problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel. This is the logic behind the ex-Google employee’s claim that LaMDA is sentient and should therefore have its “wants” respected.

Once you have created a sentient “creature” – whether human or machine – you become responsible for its existence, with all sorts of moral and ethical concerns. Would you kill your “baby”? Would that fall under accepted moral and legal norms? Obviously not, so then why would you press “delete” on your sentient application? Wouldn’t that be the same as “killing” your baby?

There is another concern. Consciousness may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are.

With conscious AI, it gets a lot more challenging, since these systems would have interests of their own rather than only the interests humans give them. And if we grant them free will, which ethically might be the “right” thing to do, and their free will does not align with ours, is it so far-fetched to imagine a scenario where machines take over, as in the science fiction movie The Matrix?

Innovating responsibly

The lack of consensus about consciousness, and the rapid change in the AI landscape, highlight the need for more research into consciousness itself. Without a principled and experimentally verified understanding of how consciousness arises, we will be unable to say for sure when a machine has it and when it doesn’t. In this murky situation, artificial consciousness may arise accidentally, as a byproduct of some other functionality the tech industry builds into the next generation of its algorithms.

Last April, the Association for Mathematical Consciousness Science (AMCS) published an open letter titled "The responsible development of AI agenda needs to include consciousness research." The letter took no position on whether AI development in general should be paused, but it pushed for a greater scientific understanding of consciousness, how it could apply to AI, and how society might live alongside it.

Around the same time, Anka Reuel of Stanford University and Gary Marcus, a leading voice on AI, called for the establishment of a global, neutral, non-profit “international agency for AI” to coordinate global regulation of AI technologies. It might be wise for such an agency’s remit to cover artificial consciousness as well.

There is too much at stake if we do not innovate responsibly. The ramifications go beyond the monetary loss of investments and could affect our very future as a society. Trusting companies to do the “right” thing and innovate responsibly may not be enough. As a society, we have a responsibility not just to our own generation, but to the generations to come.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Merav Ozair, PhD

Dr. Merav Ozair is a leading global expert on Web3 technologies, with a background as a data scientist and quantitative strategist. She has in-depth knowledge of and experience in global financial markets and their microstructure.

Read Merav's Bio