Editorial

Sentient AI — Is That What We Really Want?

Chirag Shah, University of Washington

As anyone who has taken a class in AI knows, the field has long had a test, named after the father of AI, Alan Turing, for judging whether a system is really intelligent. The Turing Test proposes that a system and a human be placed behind a curtain (or anything else that obscures their true identities) and that a user interact with both. If the user cannot tell which one is the human and which one is the machine, the system passes the test and is deemed intelligent. Originally called the “imitation game,” the Turing Test has been around since the 1950s. While it has been a standard component of AI courses for decades, it has also been criticized on many fronts. As chatter about sentient AI becomes more prominent in the mainstream media, what the Turing Test is and what it implies for us needs to be resurfaced for all audiences, because whether or not people are interested in AI, their lives may already be affected by this conversation and the developments behind it.

—Do we really want systems that have intentionality, consciousness, and free will?—

Let’s ask why the Turing Test is problematic. For this, we will dig a bit deeper into what a machine, or more specifically a computer, does when it processes information. At its core, a computer takes some input (data), runs a set of instructions (algorithm), and produces an output. We could use such a computer to converse in Chinese. All we have to do is create a program that takes various Chinese character inputs and generates corresponding Chinese character outputs. Imagine further that we do this so well that the computer in question is able to generate very fluid output in natural Chinese and fool a Chinese-speaking user into believing that they are really interacting with another Chinese-speaking human. This computer passes the Turing Test. Does that mean it really understands Chinese? This is the question John Searle, the famous American philosopher, asked when he presented this thought experiment, famously dubbed the Chinese room.
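
To make the symbol-manipulation point concrete, here is a minimal Python sketch of a “Chinese room” style program. Everything in it (the rule book, the phrases, the fallback reply) is hypothetical and purely illustrative, and it is not how LaMDA or any real system is built; the only point is that the program maps input symbols to output symbols by rule, with no representation of what the symbols mean.

```python
# A toy "Chinese room": the program follows a rule book pairing input
# symbols with output symbols. It never represents what the symbols
# mean; it only matches them and emits the paired response.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",              # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然，我说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def chinese_room(message: str) -> str:
    # Look the input up in the rule book; fall back to a stock reply.
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    # Fluent-looking Chinese comes out, with zero understanding inside.
    print(chinese_room("你好吗？"))
```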

Searle gives us two views of AI: strong and weak. If the computer in the Chinese room experiment really understands the Chinese language, it is a case of strong AI. If that computer, on the other hand, merely simulates that understanding, it is a case of weak AI. Searle further argues that it is not possible for a computer to really understand anything, let alone Chinese. Why? Because if an English-speaking human who knows no Chinese were given an English-Chinese dictionary and enough instructions, they could take Chinese input and produce Chinese output, but we would not say they really speak or understand Chinese. Similarly, just because a computer can produce convincing output, it cannot be said to understand what it is doing. Therefore, while this computer (or that human) passes the Turing Test, that gives us no strong indication of intelligence.

Originally proposed in 1980 (Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457), the Chinese room thought experiment is, I believe, very relevant once again. As news broke recently about an engineer at Google believing that the company’s language-model-based conversational system, called LaMDA, had become sentient, people started asking: can this really be possible? Google itself, and many others, quickly argued that the engineer was mistaken and that the system in question had not gained consciousness. But there was palpable excitement (and perhaps some fear) among the mainstream media and the general public in response to this news.

Throughout the history of AI, there have been systems that gave off an air of intelligence or consciousness. Even I wanted to do something that ‘cool’ when I was a sophomore undergraduate in Computer Engineering. So I built a chat program that one could converse with in natural language (text only) and get natural-language responses from. That program, written in C, was meant to give an impression of understanding, but all it did was parse the input and return canned responses, with some probabilities mixed in for variation. It wasn’t meant to pass the Turing Test or solve real-world problems, but it was fun to build. That was a long time ago. Systems like LaMDA are built on very large corpora and are much more sophisticated in generating their responses. If such systems can write poetry and paint pictures, they can certainly produce phrases and sentences that sound very realistic and seem to come from a deeply thoughtful person. In other words, these systems could pass the traditional Turing Test, but as Searle would have us ask: does that really mean the system understands anything, let alone has consciousness?
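
To show how little machinery such an illusion takes, here is a rough sketch, in Python rather than the original C, of the kind of keyword-and-canned-response program described above. The keywords and replies are invented for illustration; none of this is the original code.

```python
import random
import re

# A toy keyword-matching chatbot in the spirit of the program described
# above: it scans the input for known keywords and returns a canned
# reply, with randomness mixed in for variation. There is no model of
# meaning anywhere in it.
CANNED = {
    "hello":   ["Hi there!", "Hello! How are you today?"],
    "weather": ["I hear it is lovely outside.", "Looks like rain to me."],
    "feel":    ["Feelings are complicated, aren't they?", "Tell me more about how you feel."],
}
FALLBACK = ["Interesting. Tell me more.", "Why do you say that?", "Go on..."]

def respond(user_input: str) -> str:
    words = re.findall(r"[a-z]+", user_input.lower())
    for keyword, replies in CANNED.items():
        if keyword in words:
            return random.choice(replies)   # canned, lightly randomized
    return random.choice(FALLBACK)          # stock reply when nothing matches

if __name__ == "__main__":
    print(respond("How do you feel about the weather?"))
```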

I would add an even more important question: do we really want systems that have intentionality, consciousness, and free will? If the answer is ‘no’, we don’t have to worry about the Turing Test or Searle’s philosophical objections. We can just build AI systems that perform tasks that help us solve actual problems, from finding useful information to detecting early-stage cancer. These are cases of Artificial Narrow Intelligence. But if we are inclined to invent an AI system that understands and feels the way humans do (called Artificial General Intelligence), we have to think about the ramifications of saying ‘yes’ to these questions. What purposes will such systems serve beyond performing multiple tasks? Would they be effective enough if we don’t worry about them gaining ‘consciousness’? If not, what does that ‘consciousness’ mean here? What do we have to lose by chasing this, hyping this, and perhaps achieving this?

While we ponder these longer-term questions, there are some immediate problems to address. The recent hype around Google’s LaMDA has shown us that even well-trained professionals can lack an understanding of what these AI systems are capable of. In this particular case, the engineer thought the system had gained consciousness because it was starting to sound so much like a human with deep feelings and beliefs. But that’s the Wizard-of-Oz trick: making it seem like there is real magic here, when what we actually have is quite mundane. If a trained professional can make such a mistake, so can many average users. There is a danger in trusting these systems beyond their actual capabilities. They may seem to pass a test of intelligence (Turing Test or not), but they don’t really understand anything. This is an illusion. Sure, sometimes the illusion can be so powerful that we take it as reality. But that’s all there is to it. We need to be aware of the hype, the parlor tricks, and the grand-scale illusions, so that we use these systems for what they are, tools for us, rather than as our mentors or, God forbid, our messiahs.

Cite this article in APA as: Shah, C. (2022, June 29). Sentient AI—Is that what we really want? Information Matters, Vol. 2, Issue 6. https://informationmatters.org/2022/06/sentient-ai-is-that-what-we-really-want/

Author

  • Chirag Shah

Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.
