Editorial

Finding Fairness with a Fickle Framing

Chirag Shah, University of Washington

In 2018, I spent several months at Spotify in New York City during my sabbatical from Rutgers University. There I came across a notion of “fairness.” No, that wasn’t the first time I had heard of fairness; of course, I knew fairness as a social construct and even as a statistical concept. But it was the first time I encountered fairness in the context of a recommender system. Spotify, like many other internet services, is a multi-sided marketplace. There are listeners (users, customers) on one side and artists (studios, producers) on the other. Pleasing one side can harm the other. For example, most users want to listen to popular music (that music is popular for a reason!), such as today’s hottest hits or artists. It’s no surprise that every year, on the day of the Super Bowl (the US football championship game) or the day after, the artists who perform during the halftime show see a huge surge in streams of their music. So, if we want to please our users, we should recommend the music that tops the charts and dominates popular culture. But if we keep this up, we create a positive feedback loop: users listen mostly to that music because it’s the easiest to find, and that in turn keeps the same music and artists popular. Besides, this is not fair to the other 99% of artists and their music. To be fair to them, we should highlight their creations too, but then we may have users who think our recommender system is not that great. This problem of trying to balance fairness for competing parties or objectives is called “marketplace fairness.” At Spotify, we were able to address this sort of fairness with decent success, but it remains an open problem, and much still needs to be done and understood.
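To make the tension concrete, here is a minimal sketch, in Python, of a re-ranker that blends predicted user relevance with an exposure boost for under-exposed artists. Everything in it is hypothetical: the tracks, the scores, and the trade-off weight alpha are invented for illustration, and this is not Spotify’s actual method.

```python
# A minimal, hypothetical sketch of marketplace-fairness re-ranking.
# The tracks, scores, and trade-off weight are invented for illustration;
# this is not Spotify's actual algorithm.

def rerank(items, alpha=0.3):
    """Blend predicted user relevance with an exposure boost for long-tail artists.

    items: list of dicts with 'track', 'relevance' (0-1, predicted for this user),
           and 'popularity' (0-1, the artist's share of overall streams).
    alpha: weight on the fairness term; alpha=0 ranks purely by relevance.
    """
    def score(item):
        exposure_boost = 1.0 - item["popularity"]  # reward under-exposed artists
        return (1 - alpha) * item["relevance"] + alpha * exposure_boost

    return sorted(items, key=score, reverse=True)

candidates = [
    {"track": "halftime-show hit", "relevance": 0.95, "popularity": 0.99},
    {"track": "mid-tier album cut", "relevance": 0.85, "popularity": 0.40},
    {"track": "niche indie song", "relevance": 0.80, "popularity": 0.05},
]

for item in rerank(candidates, alpha=0.3):
    print(item["track"])
```

With alpha set to 0, this ranks purely by relevance and the halftime-show hit wins; at alpha of 0.3, the niche track rises to the top. The unresolved question is exactly the one above: who gets to choose alpha, and on what grounds?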

—Fairness is often well-understood “in spirit” and less clear when it comes to actual implementation.—

This was my first in-depth engagement with the notions of “fairness” and “marketplace fairness” in recommender systems. Through that work, and thanks to many amazing people there, I learned about fairness as a notion that was becoming increasingly important for many computational systems, especially machine-learning-driven ones. But what’s not clear is how we should think about this notion, conceptually and practically. What is fair? How do we measure fairness? How do we improve it? How does it affect other factors that also matter, such as satisfaction, relevance, and utility?

In the years since that sabbatical at Spotify, I have been grappling with these questions. As recently as last week, during a presentation by RAISE faculty to incoming PhD students, we got questions about defining and measuring fairness.

Fairness is often well-understood “in spirit” and less clear when it comes to actual implementation. For instance, I doubt anyone would say that they want to build a system or a service that is not fair. Where we may disagree is on what is fair. I often give the example of an income tax system. In most countries and localities where people and organizations are expected to pay taxes, there are usually complex, non-linear systems to determine who should pay how much. Typically, we want those with high incomes to pay more (often, a lot more), and those below some income threshold to pay nothing at all. This is a biased system, but we consider it to be fair. Or do we? If you ask around, you will not get consensus, or anywhere close to it, on the validity of this notion of fairness. Not only do people disagree at any given time about what fair taxation is; that notion also changes over time (and thus, new tax code is proposed and implemented almost every year).
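As a concrete illustration, here is a toy progressive tax computation in Python. The brackets and rates are entirely made up; the point is only that the system deliberately treats incomes differently, a bias we may nevertheless judge to be fair.

```python
# A toy progressive tax schedule with invented brackets and rates.
# The system deliberately treats incomes differently ("biased"),
# yet many would judge the outcome fair.

BRACKETS = [               # (upper bound of bracket, marginal rate) -- hypothetical
    (10_000, 0.00),        # first 10,000 is untaxed
    (50_000, 0.20),        # income between 10,000 and 50,000 taxed at 20%
    (float("inf"), 0.40),  # everything above 50,000 taxed at 40%
]

def tax_owed(income):
    """Sum the tax due in each bracket up to the given income."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

for income in (8_000, 40_000, 200_000):
    print(f"income {income:>7,}: tax owed {tax_owed(income):>9,.0f}")
```

Under these invented brackets, someone earning 8,000 owes nothing, while someone earning 200,000 pays 68,000, an effective rate of 34%. Whether that differential treatment counts as fair is precisely the value judgment described above.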

Is it fair that a viewer must watch commercials to enjoy a show on TV? If you ask the viewers, they may have a different view than the TV station or the show’s producers. Somebody has to pay for these expensive productions. Maybe we charge people to watch the shows if they don’t want to watch the commercials. There are certainly such services (most streaming services, pay-per-view shows). But then, are we discriminating against those who can’t afford to pay? The same debate has come up over how to pay for internet services. The most prominent model is ad-based: think of Google, Facebook, and YouTube. They provide you “free” services because they earn enough revenue from ad sales. The other model is what Apple typically follows: you don’t get things for free, but you also don’t get ads, and your data isn’t sold to third parties. Which one is fair?

I don’t believe we can ever come to a single decision about fairness in any of these cases. Not because we lack the technical capabilities, but because these are social constructs that represent our individual and collective values. Look at things like healthcare, education, and defense systems around the world. Each country, at different times, has made different deliberations about which of these are rights versus privileges and who should pay for them (they are all very expensive). These choices reflect socio-temporal values, which keep getting debated and changed. Without these being settled, it is almost oxymoronic to try to codify them in a system.

Does this mean we should not even try to implement fairness in some way? On the contrary. It means we need more work, a lot more work. While many definitions of fairness have been proposed and operationalized in many ways (including in some work by my lab), we need to acknowledge that none of them is perfect, and we should always provide a disclaimer about the reach of these constructs and applications. I suggest that rather than getting caught up in the definitions and formulations of fairness, we should focus on what impacts we want to create by being “fair” in a given context. For instance, in the case of a tax system, we may have certain social advancement goals (e.g., lifting a population out of poverty, providing education to the underserved), and so a system of taxation that allows us to do this is fair for that time. But that’s not all. We also need to understand the “costs” of being fair. In this case, it may mean upsetting some class of citizens who must pay more into a system that doesn’t benefit them directly.

The work continues. Scholars and practitioners are constantly debating what’s fair, how to measure fairness, and how to implement it in new and existing systems. I’m OK not having a single set of answers to these questions, as long as we continue these conversations, explorations, and debates. Because we will need them to keep framing and re-framing this fickle notion of fairness in our systems.

Cite this article in APA as: Shah, C. (2022, March 12). Finding fairness with a fickle framing. Information Matters, Vol. 2, Issue 3. https://informationmatters.org/2022/03/fairness-with-a-fickle-framing/

Chirag Shah

Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation, and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.