The true cost of free search

Chirag Shah, University of Washington

With all the attention that Facebook has been getting recently, we often forget that the problem of biased algorithms is not limited to social media. Let’s talk about search engines. We use them all the time. Aren’t they wonderful? With a few keystrokes and clicks, we get access to a whole world of information. It’s so easy, it’s so simple, and it’s all for free. Isn’t it?

No, and that’s where the big misunderstanding lies. Not about the ease or simplicity, but about the cost. We don’t seem to be paying for these services. Isn’t Google so generous to give us wonderful tools like Google Search, YouTube, and Google Assistant, all for free? No wait, they place ads, you say. Oh, so that’s how they make money. OK, so that’s not out of the ordinary. We have seen this before. We get “free” TV programs, but the networks insert commercials, and that’s how they make money. Sure, it all adds up. But no. That is not the only way Google and others make money.

Let’s go back to that TV example. When a network shows you commercials during a program, it doesn’t know anything personal about you. Sure, it knows the general demographics the program appeals to, or who the key audiences are, and based on that it sells and shows commercials. But it has no knowledge of what you were watching before, no idea whether you really paid attention to one of its commercials or took a bathroom break, and no access to your purchase history. Nor can it track you from one program to another, or from one network or channel to another. But Google can do all that, and more.

Chances are you know (at least partially) how this is done. Cookies. Yes, cookies. They have been around for decades, originally intended to enhance a user’s experience in different ways, from autofilling a form to speeding up page loads. But those days are long gone. Cookies are increasingly used to track users across sites and services in order to build a more comprehensive view of their behaviors and preferences. On the surface, that may seem like a good idea: advertisers can now provide more targeted ads, ads that are actually relevant to you rather than just random distractions. But once again, that’s not all. The models behind the scenes that connect these dots have been figuring out our pain points. They know that if you are idly browsing around late at night reading celebrity gossip, perhaps you are bored and would react strongly to junk food, get-rich-quick schemes, and a diet pill that can change your life. What they are doing is exploiting what’s known as dark patterns: designs where the user interface, business objectives, and psychology intersect to target individuals in ways not possible before and push them toward impulsive or emotion-driven decisions.
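To make the mechanism concrete, here is a minimal sketch of how a third-party cookie lets separate visits be stitched into one profile. This is a toy illustration, not any real ad network’s code; the class, sites, and pages are all made up.

```python
from collections import defaultdict

# Toy model of cross-site tracking: the same third-party cookie ID is
# sent to the tracker from every site that embeds it, so the tracker
# can link otherwise unrelated visits into one behavioral profile.
class Tracker:
    def __init__(self):
        # visitor_id -> list of (site, page, time) visits
        self.profiles = defaultdict(list)

    def log_visit(self, cookie_id, site, page, time):
        """Record a page view tied to the cookie's visitor ID."""
        self.profiles[cookie_id].append((site, page, time))

    def profile(self, cookie_id):
        """Everything the tracker has linked to this one cookie."""
        return self.profiles[cookie_id]

tracker = Tracker()
# The same cookie ID shows up on two unrelated sites:
tracker.log_visit("abc123", "news.example", "/celebrity-gossip", "23:45")
tracker.log_visit("abc123", "shop.example", "/diet-pills", "23:52")
print(tracker.profile("abc123"))
```

The point of the sketch is that neither site needs to share data with the other; the shared cookie ID alone is enough to connect late-night gossip reading with a later shopping visit.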

Sure, there are some efforts to curb the invasion of cookies, such as the European Union’s consent requirements, popularly known as the cookie law. But people often don’t understand these technologies well enough to make truly informed decisions. We need warning labels like the ones on packs of cigarettes: “Continuing to use this site may compromise your personal liberties and identity. Do you really want to proceed?”

Now, let’s go back to that idea of nudging people to make emotion-driven rather than rational decisions. Ad-based and attention-based services thrive on this. Companies such as Google, Facebook, Twitter, and Amazon need you to spend as much time as possible on their sites: browsing, clicking, and sharing. Some of that is for direct sales. The more time you spend on an e-commerce site, the more likely you are to make a purchase. Traditional stores have known this for decades. They do various things to make sure you step into the store and stay there longer. Enticing items or a big sale sign at the front, and special items at the back, are part of that trick. But online services go beyond that. Their business objectives are directly tied to how engaged you are with the site. The more you visit and the more time you spend, the more ads they can show, the more data they can collect (and sell), and the better their chances of retaining you as a customer. So, not surprisingly, their algorithms are designed to optimize for user engagement.

On one hand, this may seem like a good idea; if we are engaged, that’s a good thing, right? Not necessarily. That’s because we react far more strongly to emotions like hate and fear than to relevant information or positive emotions; ergo, to increase traffic and engagement, increase the exposure to such stories. Don’t get me wrong: nobody is sitting behind the scenes coding this. But, and here’s the most important point, the algorithms that drive content ranking and presentation are driven by the business objective of engagement, which makes them learn (implicitly) that hate, fear, and conspiracy theories are good.
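The dynamic above can be sketched in a few lines. This toy ranker, with entirely made-up items and click counts, never sees labels like “outrage” or “informative”; it only optimizes the one signal the business cares about, observed click-through, and the most provocative item floats to the top on its own.

```python
# Toy engagement-driven ranking. The items and counts are invented
# for illustration; no real platform's data or API is shown here.
items = [
    {"title": "Measured policy analysis", "clicks": 40, "impressions": 1000},
    {"title": "Outrageous conspiracy claim", "clicks": 180, "impressions": 1000},
    {"title": "Feel-good local story", "clicks": 60, "impressions": 1000},
]

def engagement_score(item):
    # The only objective: observed click-through rate.
    return item["clicks"] / item["impressions"]

# Rank purely by engagement, highest first.
ranked = sorted(items, key=engagement_score, reverse=True)
for item in ranked:
    print(f'{engagement_score(item):.2f}  {item["title"]}')
```

No one coded “promote conspiracies”; the ranking simply rewards whatever people react to most strongly, which is exactly the implicit learning described above.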

It is easy to target bad actors or propaganda machines. Google and Facebook will promise to remove them, and even to flag and take down bad content. But we won’t see real change until two things start happening: (1) these services change their algorithms so they are no longer tuned to user engagement, an objective that is flawed and dangerous; and (2) we, the users, start voting with our clicks and visits. Stop feeding the monster.

Both of these are extremely hard. Somehow I don’t see big tech abandoning the way their algorithms learn what to recommend, even if we manage to regulate them in some way (which is itself a herculean task). Therefore, we need external auditing of these algorithms, and those audits need to be made available to all users. Perhaps we don’t get to put a giant label with a skull on it like we do with cigarettes, but even a simple awareness of what these algorithms are doing could cause people, at least some people, to act differently.

Finally, I don’t want this to be an “us vs. them” battle. I don’t want to get rid of Facebook, Google, or Twitter. These services bring plenty of benefits to individuals and societies, and they all started out with noble objectives. But then the pressure of constant growth, mixed with business-driven goals, shifted the dynamic away from what is truly good and relevant. I want these companies and products to start taking responsibility and to disconnect their business objectives from how their algorithms learn and optimize. I want them to be transparent. I want them to help us not get addicted to them. I want to work with them so they can exist without destroying our democracy and free will. Yes, doing so will affect these companies’ bottom lines. They will still be profitable, just not as profitable.

In the meantime, let’s accept that these services are not free. Just because we are not paying for them directly doesn’t mean they don’t cost us anything. On the contrary: they are costing us anxiety, eating disorders, low self-esteem, fear of missing out (FOMO), and even lives. There is indeed no free lunch.

Cite this article as: Chirag Shah, “The true cost of free search,” in Information Matters, October 16, 2021, https://informationmatters.org/2021/10/the-true-cost-of-free-search/.

Author

  • Dr. Chirag Shah is an Associate Professor in the Information School, an Adjunct Associate Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Associate Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.

