
The Questions We Are Not Asking Enough About AI

It was a crisp December morning, the kind that refreshes you and fills you with renewed energy for the day ahead. But this was not that kind of day. In just a few hours, you will face the world champion in a sport you have never played before. That should send chills down your spine, or perhaps give you a panic attack. The sport is chess, the world champion is a piece of software named Stockfish, and you are a new program called AlphaZero, developed by a company named DeepMind.

Why would you do this? Or rather, how would you do this? Can you even do it? Having never played chess, not even knowing how to play it, you are facing none other than the world's best chess player. Sure, that chess player is not a human, but perhaps that is even scarier: it will not get tired or intimidated, and it has immense computing power. Indeed, Stockfish could evaluate over 70 million positions per second. Humans can't come close to matching that. In fact, humans have long been left out of the world-champion conversation when it comes to chess.


Back in 1997, Garry Kasparov, a Russian chess grandmaster and then the reigning world champion, was famously defeated by IBM's supercomputer Deep Blue. Since then, chess programs have kept evolving, getting faster, smarter, and more efficient. You no longer needed a supercomputer to run them; you still needed a very capable machine, but computers were getting faster, cheaper, and much more powerful. Not long after the world overcame its fears of Y2K, the best chess players in the world were not humans; they were computer programs, and Stockfish eventually became the best of them.

Move forward 20 years and there is yet another challenger on the block, except that this new software had no clue how to play chess at all. And yet, two decades after the historic milestone of Kasparov vs. Deep Blue, the chess world witnessed another significant moment: Stockfish vs. AlphaZero. Out of 100 games, AlphaZero won 28 and drew the remaining 72. And remember, it did not even know how to play chess just a few hours before the first game. A year later, AlphaZero defeated Stockfish again in a 1,000-game match.

The whole chess world was in disbelief. Kasparov said, "It's a remarkable achievement…. It approaches the 'Type B,' human-like approach to machine chess dreamt of by Claude Shannon and Alan Turing instead of brute force." Compared to Stockfish's 70 million positions per second, AlphaZero could evaluate only about 80,000 per second. Sure, that is still far more than any human can manage, but the point is this: the way to develop abilities, and perhaps intelligence, is not necessarily through more computational power. More importantly, what exactly is the benchmark for intelligence?

Kasparov referred to Turing, who gave us the famous Turing Test. According to that test, if, while conversing with a system behind a curtain, we cannot tell whether it is a human or a machine, then that system passes the test of intelligence. There are several problems with this kind of testing, some of which I have outlined before, so I won't go into them here. What matters is that we think about what intelligence really is. That is the first of many questions we need to ask as the world around us is flooded with more and more AI-driven systems.

There is a lot to be hopeful about with these AI developments, and perhaps a lot to be fearful of as well. But in the midst of these two polarized discussions, we have not asked some of the most fundamental questions. Here are a few of them.

  • What does it mean to possess intelligence? If a system passes the Turing Test, does it really have intelligence? And if that system is simply great at pretending to be intelligent, can it still be called intelligent?
  • What do we, humankind, want from the development of intelligent entities? Is it to make our lives easier or better? Is it to free us from our existing cognitive tasks so that we can do other things we currently cannot? Is it to better understand our own intelligence?
  • We may not have Artificial General Intelligence right now, but we are getting there. Should we really go there? What is the purpose of creating an entity that can be deemed to possess general intelligence like ours?
  • How do we ensure that the benefits of AI are spread as equally as possible across cultures, languages, and demographics? The internet has been around for decades, and yet we still talk about the digital divide, because there remains an equity and accessibility problem.
  • What are the environmental implications of the AI arms race? Many current implementations of its building blocks, such as large language models (LLMs), have large carbon footprints. These may improve with time, but how will we ensure that we do not harm the planet while trying to elevate its most prominent species?

There are no easy or quick answers to these questions, but if we do not start asking them, and asking them often enough, there is no hope that the answers will somehow emerge. We also have the opportunity to ask these questions now, before the answers are decided for us and it is too late to discuss or debate any of these points. So as we keep advancing AI, let's not always get lost in the promises and perils of the technology. Let's also ask enough questions about what we are doing and why.

Cite this article in APA as: Shah, C. (2023, May 24). The questions we are not asking enough about AI. Information Matters, Vol. 3, Issue 5. https://informationmatters.org/2023/05/the-questions-we-are-not-asking-enough-about-ai/

Chirag Shah

Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation; the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.