In AI We Trust, But Should We?

Chirag Shah, University of Washington

When my older daughter was five, she once told me, “Dad, Alexa must be a scientist because she knows everything.” Yes, we have a bunch of Alexa-based devices around the house, but this is not an endorsement of any company or device. My kids use these devices daily, and they have been doing so since before they could properly read or write. In other words, they had a connection with this kind of technology (voice-based information interaction systems) before they entered kindergarten. More importantly, they believe what they get from these devices because (1) there is a human voice; and (2) that human seems to know almost everything.

One could argue that kids have a limited understanding of the world and of technology, and that they tend to humanize everything, so it makes sense that they develop such trust in an AI system. While this behavior is quite prominent in kids (see “Hey Google, Do Unicorns Exist?”: Conversational Agents as a Path to Answers to Children’s Questions), adults place similar trust in AI systems too. In our studies of human information behavior, we often hear people saying that if they don’t find something on Google, it must not exist. Or that if they saw something on Google, it must be true. The information scientist in me finds this troubling, but when I think about it rationally, I can see why people would want to do this: trust a system that they barely understand.

Most of us don’t really understand the inner workings of the vehicles we drive. And yet, we trust them to the point of risking our lives riding in them. We trust that when we push one pedal, the vehicle will move, and when we push the other, it will stop. We trust the steering wheel to make that precise turn we need to avoid another vehicle or a pedestrian. We do this every day and have done it, collectively, for decades. Do these vehicles and their parts always follow through on the trust we bestow upon them? No. There are times when these systems fail, and the results can be devastating. And yet, we continue trusting them. But why?

I believe there are three reasons for trusting these systems, whether they are our vehicles or the many AI systems we use every day, like search engines and movie recommendations.

  1. These systems in general work as expected.
  2. We have little to no incentive to not trust them.
  3. We don’t have good alternatives.

Take, for instance, your favorite streaming service for movies or music. It is there for you any minute of any day that you want it. When you go to a certain area or perform a certain action, like a click or a scroll, it does what you expect it to do. Most of the time, what the service recommends for you to listen to or watch is a reasonable, if not great, suggestion. You are probably paying for the service and have no reason to believe that it won’t give you what you deserve (in this case, entertainment). Finally, what are your alternatives? In the case of music streaming, you may be part of an existing ecosystem (e.g., Apple, Google). In the case of movie streaming, many platforms have exclusive content that you can’t get elsewhere. Of course, people do cut ties with their services, but they invariably find other services to fill that void. There are so many on-demand streaming services that it is hard to escape them!

OK, so what? These services provide value and we are happy with them. Absolutely. But there are differences between trusting a car and trusting a streaming service without understanding how they work. Your car, unless it has self-driving capabilities, is a deterministic system that you can control. With most AI systems we encounter every day, we lack that agency, and such systems are constantly trying to get us to give up even more of it so they can do smarter things. For instance, a while ago, Spotify changed its interface to make search harder to reach and to emphasize its recommender services more strongly. Why? Search gives the user some agency: they can make a specific request for what they want to listen to. A recommender service gives the system more say and control over what the user gets access to. As an ultimate realization of this, you can imagine a scenario where you never ask the system for what you want, but simply take what it provides. The key here is trust. If you, as a user, trust the system to give you good, relevant, entertaining content almost all the time without seeing any harm, you may be willing to give up agency and control.

This is not necessarily a bad thing, but there are potential dangers as we develop more trust in these systems and give up agency. We need to question and understand what it means for us, as individuals and as a society, to relinquish control to AI systems, especially when we don’t quite understand how they work, what agendas drive them, and how our interactions with them will shape our future relationships with them.

Let’s consider that first example of a conversational agent. If you own a smartphone, you have one with you all the time. These agents are also available through smart speakers, at home and in our cars. Thus far, these systems have been quite good at answering factual questions (the reason my daughter thought Alexa knew everything). But they are not so good with advice- or opinion-based questions (e.g., “Should I get vaccinated?”). Most of the time, they will bring up the first result from a web search or dodge the question. But these systems are getting smarter, and they want to cover broader areas of information access. It’s only a matter of time before they venture far enough into those other kinds of questions and become more proactive than reactive, giving us suggestions without us explicitly asking. If we have misplaced trust in them (like my daughter thinking that Alexa can’t be wrong because she knows everything), this can be problematic and very harmful. For instance, Alexa was recently found telling a 10-year-old girl to touch a live plug with a penny.

I’m not suggesting that we stop trusting these systems, but that trust warrants closer examination. As these systems get smarter and expect us to give up more of our agency to them, it is very important that the trust we place in them not be blind. On the system developers’ side, it is important that they have their users’ (and perhaps legislators’) support and engagement as they move toward these smarter and more proactive systems. As they say, trust takes years to build, seconds to break, and forever to repair.

Cite this article in APA as: Shah, C. (2022, February 2). In AI we trust, but should we? Information Matters. Vol. 2, Issue 1. https://r7q.22f.myftpupload.com/2022/01/in-ai-we-trust-but-should-we/

Author

  • Dr. Chirag Shah is an Associate Professor in Information School, an Adjunct Associate Professor in Paul G. Allen School of Computer Science & Engineering, and an Adjunct Associate Professor in Human Centered Design & Engineering (HCDE) at University of Washington (UW). He is the Founding Director of InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. They both create interesting synergy, resulting in Human-Centered ML/AI.
