What is AI?
Chirag Shah, University of Washington
I was recently asked to serve on a panel at a town hall meeting where we would be talking about Artificial Intelligence or AI. The central question was about defining AI or what AI means to us. There are at least three types of answers I see here—theoretical, practical, and philosophical.
We will start with the theoretical definition of AI. This is what I would use in an introductory class on AI or machine learning.
AI is a field of research and development that aims to build computational systems that can perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
To a budding undergraduate student, this seems quite straightforward, and in a way, it is. AI is concerned with mimicking human cognition and behaviors in an effort to have artificial systems perform and even excel at tasks that are traditionally human endeavors. But what does this really mean in practice? That’s where the more practical definition of AI comes in.
AI is a computational system that automates or augments human capabilities.
This is a practical definition of AI, which doesn’t care whether the system in question acts like a human. Rather, it cares about the utility of that system—does it get the job done? Does it do something that replaces or enhances human tasks? It takes inspiration from human abilities, but does not bother with mimicking them. Think about an airplane (which is not an example of AI, at least not yet)—it takes its inspiration for flight from birds, but doesn’t have flapping wings.
Microsoft AI CEO Mustafa Suleyman calls this ACI—Artificial Capable Intelligence. I would even drop ‘intelligence’ from this because (1) a lot of capable systems lack any real measure of intelligence; and (2) we don’t have an agreement on what ‘intelligence’ really is. And that brings us to the third view of AI.
Before we talk about someone else’s intelligence, shouldn’t we know enough about our own intelligence? I remember, some years ago, my four-year-old daughter being able to remember 10-digit phone numbers, and us bragging about how smart she was. Well, she was and is smart, but is being able to remember large numbers or doing long division in your head a good measure of smartness or intelligence? Not really. While we teach kids these basic skills of multiplication and division in elementary school, when faced with complex real-life calculations, we reach for a calculator. That’s smart. Yes, the right or the smart thing to do for such tasks is to get the right tool. The calculator in this case is an extension of our ability to accomplish something. It’s not that we can’t do those complex calculations ourselves, but we save a lot of time and avoid mistakes by delegating them to a tool. If a human were to do long division in a fraction of a second, we would consider them superhuman, but if a calculator does it, that’s not impressive. The same goes for search autocompletion, real-time translation of real-world images from any language to any other, and the 1970s onboard computers still helping the Voyager spacecraft navigate interstellar space. A generation or two ago these would have been impossible to fathom, but now we take them for granted.
And so it is with seemingly intelligent apparatus. They stop being ‘intelligent’ for us as soon as they prove capable of tasks we do regularly. We keep moving the goalposts for what could and should be considered ‘intelligent’. Active researchers in AI often complain about how this is cheating; any time something in AI gets solved, it moves to a different field and AI doesn’t get the credit. I don’t believe we are moving these goalposts or taking solved things out of AI because we are cheating. I think it’s because we don’t have a static notion of ‘intelligence’. It has been a mirage, and we have come to accept it that way. Any time that mirage turns into a reservoir, we turn elsewhere in search of that elusive source of water. Our thirst will never be quenched.
This is not necessarily a bad thing. I see it as a spiritual journey that I’m extremely privileged to be a part of, and I hope more people join. We should be constantly on that self-discovery path, even if there is no destination. What I find problematic is using that worthy endeavor to diminish what AI has been and what it could be. I don’t care about the term AI itself. We could call it something else, like task-oriented systems or computational capabilities. But thanks to John McCarthy and generations of researchers, we are stuck with AI. That’s OK, but let’s not split hairs over what’s AI and what’s not. Instead, let’s focus on how we can build capable systems that help augment and automate tasks for us, like a calculator, search suggestions, on-demand translation, self-driving cars, and factory robots. It’s OK if these things don’t think, feel, or behave like humans. In fact, often it’s best if they don’t. I just want them to get things done—reliably and responsibly.
So, in short, I want to leave you with three messages. First, stop arguing over terminology. AI is not the best name for all that we currently label ‘AI’, but it’s the name we are stuck with. Instead of arguing over the name, focus on what we want to get out of anything that we call ‘AI’. Second, it is in the nature of a capitalist society to channel its time, effort, and attention to where the capital is. We did that with the gold rush, beanie babies, crypto mania, oil drilling, and now we are doing it with AI. If that’s where funds and resources are found, researchers, innovators, and developers will shape their narratives to fit those calls. This is not unprecedented, nor is it the last time we will do it. Wait till the waves of quantum computing, neurobiology, and augmented living start. We will repeat this. We will reconstruct our narratives to match those calls. And third, don’t let these arguments about ‘AI or not’, or even ‘what do you mean by AI’, get in the way of asking really important questions about our own identity and destiny.
Cite this article in APA as: Shah, C. (2024, October 23). What is AI? Information Matters, 4(10). https://informationmatters.org/2024/10/what-is-ai/
Author
Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.