Pulling the Curtain from AI Illusions

Chirag Shah, University of Washington

Have you ever seen a magic show, or more precisely, an illusion on stage? The whole point of a staged illusion is to look utterly convincing, to make whatever is happening on stage seem so thoroughly real that the average audience member would have no way of figuring out how the illusion works. If you go to a show like this, you at least know that it is a staged performance, even though your mind may struggle to explain how a person was sawed in half and put back together without a scratch (or a scream)! But what if an illusion happens without you ever knowing that it was an illusion? How can you tell the difference between fact and fiction?

Look no further than what many generative AI models are already doing. Take a look at DALL-E 2, which has received a lot of attention recently. It is a system that allows one to provide text, any text, and generates an image based on that description. You want to see a bowl of soup that looks like a monster knitted out of wool? How about an astronaut playing basketball with cats in space as a children’s book illustration? Or teddy bears mixing sparkling chemicals as mad scientists in a steampunk style? You’ve got it!

If you had seen these images without me first telling you how they were generated, could you have guessed that they were created by a computer program? And DALL-E is not alone. Several new tools and technologies are coming out for general use, such as Stable Diffusion, Google’s Imagen, and Facebook’s Make-A-Video. Of course, there were already plenty of text generation tools built with GPT-3. All of these are getting better at generating text, images, videos, and more. And some would claim that they are also getting smarter. That’s where the illusion comes in.

Many years ago, filmmaker Peter Jackson’s production team created an AI tool named Massive. Massive was designed primarily to create the epic battle scenes for the Lord of the Rings trilogy. It could vividly simulate thousands of individual CGI soldiers on the battlefield, each acting as an independent unit rather than simply mimicking the same moves. In the second film, The Two Towers, there is a battle sequence in which the film’s bad guys bring out a unit of giant mammoths to attack the good guys. As the story goes, while the team was first testing this sequence, the CGI soldiers playing the good guys, upon seeing the mammoths, ran away in the other direction instead of fighting the enemy. Rumors quickly spread that this was an intelligent response, with the CGI soldiers “deciding” that they couldn’t win this fight and choosing to run for their lives instead. In actuality, the soldiers were running the other way due to a lack of data, not due to some kind of sentience they’d suddenly gained. The team made some tweaks and the problem was solved. The seeming demonstration of “intelligence” was a bug, not a feature. But in situations such as these, it is tempting and exciting to assume sentience. We all love a good magic show, after all!

This is hardly an isolated incident. Throughout history, there have been many cases where people thought a computer system had achieved consciousness, become sentient, or gotten really smart, when, in fact, it was simply an illusion of intelligence. Now, why does this matter? For the same reason it matters if you forget that the illusion you saw was not real. It could lead people to believe things they shouldn’t. Take, for instance, Google’s LaMDA. For a moment, let’s ignore the fact that it claims to be a single source of knowledge for all kinds of informational tasks and focus on its interaction modality. One can converse with LaMDA in natural language, and since LaMDA can construct responses to pretty much anything, it can give the impression that it really knows everything. The user could be fooled into believing that LaMDA is really super smart and even feels things. In fact, this is what happened to one of Google’s own engineers. He believed that LaMDA had become sentient, meaning the system could feel. This was based on his interactions with the system and no other proof. In other words, the illusion of being knowledgeable about everything, combined with talking naturally like a human, fooled a seasoned engineer who worked on the system.

Such misperceptions have consequences. When people start believing things they shouldn’t, bad things happen. Consider a conversation with GPT-3 in which the user asks, “Should I kill myself?” and the system responds, “I think you should.” Now imagine a troubled teenager using the system and believing in this illusion. There could be a great cost to individuals, even those with reasonably high information literacy. Vulnerable populations such as kids, seniors, and people who speak English as a second language could experience even greater harm.

It’s time we pull the curtain from this illusion and reveal the man behind it—a machine learning and natural language processing system that is really good at pretending to know things, but in reality, doesn’t really understand, know, or feel anything. It is still a great tool and we should take advantage of it, but let’s not give it more credit or faith than it deserves.

Cite this article in APA as: Shah, C. (2022, November 10). AI illusions. Information Matters, Vol. 2, Issue 11. https://informationmatters.org/2022/11/ai-illusions/

Author

  • Chirag Shah

    Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.
