Editorial

Can We Really Control AI?

Chirag Shah, University of Washington

When I was an undergraduate student in computer science, Stuart Russell and Peter Norvig's book “Artificial Intelligence: A Modern Approach” was the textbook for our AI class. But since I was interested in AI beyond what a semester-long course covers, I kept that book on my shelf long after graduation. Years later, it was a treat to read Russell's writing beyond explanations of AI concepts, covering issues of control and alignment as AI proliferates through our society. In that context, Russell describes what he calls the gorilla problem.

Gorillas are among our closest living relatives, co-existing with us. However, their existence is very much driven by human will and plans. Over the last century, their population has declined significantly. Mountain gorillas almost went extinct in the 1980s, but strong conservation efforts have brought their population back up to around 1,000. Their survival hinges on what humans decide to do or not do.

This is a far cry from ten million years ago, when the ancestors of these gorillas were the dominant primates on this same planet. At some point, a genetic mutation created a branch of different kinds of apes that eventually became humans. The rest, as they say, is history. In a literal sense here, because the history of humankind starts from that pivotal moment.

The gorillas didn't have any say in the genetic divergence that led to beings that were intellectually superior. Those beings were not physically superior. In fact, as we know, humans evolved to be physically far less capable than their cousins in the other genetic branches. A silverback gorilla can lift up to 1,800 lbs (815 kg), while Hossein Rezazadeh of Iran, the world champion weightlifter, set the record by lifting 580 lbs (263 kg). Gorillas can throw with a force of about 900 lbs (408 kg), whereas a well-trained human can throw with a force of about 220 lbs (100 kg). In hand-to-hand combat, we are no match for gorillas, and let's not even think about our ability to climb trees or hang from branches.


And yet, humans are clearly the dominant species on the planet, while the fate of the gorillas depends on those humans' whims. Russell asks: given where things are now, if the gorillas had had a choice, would they have allowed such a separate branch of genetic lineage to be created? For us, the question is: can we maintain our supremacy and autonomy in a world that includes machines with substantially greater intelligence?

So, in short, do we want to control AI? Yes. Can we control AI? Not really. It's a bold move we are signing up for, whether we know it or not. We are building these systems hoping that we will still have control over them, but what we have is blind faith. As Martin Luther King Jr. once said, “Faith is taking the first step even when you don't see the whole staircase.” We have already taken the first step. We know that it's dark and we can't see what's out there. The only way we can keep going is through faith: faith that it will all work out in the end, that there will be solid ground for the next step, and the step after, until we reach a certain destination.

Cite this article in APA as: Shah, C. (2024, December 16). Can we really control AI? Information Matters, Vol. 4, Issue 12. https://informationmatters.org/2024/12/surfacing-the-silent-foundation/

Author

  • Chirag Shah

Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.