Responsible AI: Whose Responsibility Is It?
Chirag Shah, University of Washington
In 2018, I spent several months at Spotify as a Visiting Researcher during my sabbatical from Rutgers University. Among the projects I got involved in there, one was quite new for me. You see, the other projects were about applying my background in search, recommender systems, and machine learning to new problems. But this one project was different. It was about addressing bias and bringing fairness to recommendations. Of course, I had come across the issue of bias in machine learning before, but usually from a statistical point of view. For example, when you have unbalanced classes in a classification problem, you may draw wrong conclusions. Disparity in representation could lead to what's known as Simpson's paradox, where a trend that holds within every subgroup reverses once the data are pooled.
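To make that concrete, here is a minimal sketch of Simpson's paradox. Everything in it is invented for illustration: two hypothetical recommenders, X and Y, evaluated on an "easy" and a "hard" user group with made-up acceptance counts.

```python
# Toy illustration of Simpson's paradox: X beats Y within every
# group, yet Y beats X once the unbalanced groups are pooled.

def rate(successes, total):
    return successes / total

# (accepted, shown) counts per group -- entirely made-up numbers.
x = {"easy": (81, 87), "hard": (192, 263)}
y = {"easy": (234, 270), "hard": (55, 80)}

for group in ("easy", "hard"):
    print(group, f"X={rate(*x[group]):.2f}", f"Y={rate(*y[group]):.2f}")
# easy: X=0.93 > Y=0.87; hard: X=0.73 > Y=0.69 -- X wins in both groups.

x_pooled = rate(sum(s for s, _ in x.values()), sum(n for _, n in x.values()))
y_pooled = rate(sum(s for s, _ in y.values()), sum(n for _, n in y.values()))
print(f"pooled X={x_pooled:.2f}, Y={y_pooled:.2f}")
# pooled: X=0.78 < Y=0.83 -- the comparison flips when groups are merged.
```

The flip happens because X was evaluated mostly on the hard group and Y mostly on the easy one; unbalanced representation, not model quality, drives the pooled conclusion.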
On the other hand, we all know the notions of bias and disparity in social contexts. What I was discovering at Spotify was how we could connect these two notions of bias: the statistical and the social. OK, so what's new there? Well, it turns out that addressing bias and bringing fairness is not only the right thing to do from a social or statistical standpoint, but also a good thing for business. Spotify, like other streaming platforms, pays artists or rights holders based on how much their content gets streamed. Now, imagine if all its users wanted to listen only to the most popular (and thus most expensive) artists. That would not be sustainable for Spotify. The platform needs to make sure that users diversify their taste in music and listen to the bottom 99% of artists as well. This is good for those artists too, because without Spotify and others recommending them to users, they may not be discovered easily and may not earn enough from streaming revenue to continue producing their art.
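Spotify's actual recommendation stack is not public, but a minimal, hypothetical sketch can show the general idea of nudging exposure toward the long tail: blend each candidate's relevance score with a novelty bonus for less-streamed tracks. Every function, track name, and number below is invented for illustration.

```python
# Hypothetical sketch (not Spotify's actual system): re-rank
# candidates by mixing relevance with a bonus for long-tail content.
import math

def rerank(candidates, popularity, alpha=0.3):
    """candidates: list of (track, relevance); popularity: play counts.
    alpha trades relevance off against long-tail exposure."""
    max_pop = max(popularity.values())
    def score(item):
        track, relevance = item
        # Novelty bonus grows as (log-scaled) popularity shrinks.
        novelty = 1.0 - math.log1p(popularity[track]) / math.log1p(max_pop)
        return (1 - alpha) * relevance + alpha * novelty
    return sorted(candidates, key=score, reverse=True)

# Toy data: one popular hit versus two long-tail tracks.
candidates = [("hit_single", 0.90), ("indie_track", 0.85), ("new_artist", 0.80)]
popularity = {"hit_single": 10_000_000, "indie_track": 50_000, "new_artist": 1_200}
print(rerank(candidates, popularity))
```

With these toy numbers, the long-tail tracks move ahead of the hit; in practice, a parameter like alpha would be tuned against engagement and revenue metrics rather than set by hand.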
So here it was: a perfect example of societal good aligning with business objectives. It was the right thing to do, and it was a good thing to do. But what about all the other times and places where the two don't align? Who is responsible for addressing bias and ensuring fairness then?
Since my days at Spotify, I have discovered and engaged with the emerging field of Responsible AI. What is it? Well, it depends on whom you ask, but it usually involves topics like the one above: addressing bias, supporting diversity, bringing fairness to AI systems, and being ethical in how such systems get designed, deployed, and used. In addition, Responsible AI includes notions of transparency, accountability, privacy, and robustness. There are existing and emerging solutions for these problems. For example, many researchers (including myself) have been working on creating explanations for black-box AI systems in order to make them more transparent and trustworthy.
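There are many such explanation techniques; one widely used, model-agnostic example is permutation feature importance, which asks how much a model's accuracy drops when each input feature is shuffled. Here is a minimal sketch using scikit-learn, one illustrative method among many rather than any specific system from this article.

```python
# Minimal sketch of permutation feature importance: shuffle each
# feature and measure the drop in held-out accuracy. Larger drops
# mean the "black box" relies more heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
# Print the five features the model depends on most.
for name, mean in sorted(zip(X.columns, result.importances_mean),
                         key=lambda p: -p[1])[:5]:
    print(f"{name}: {mean:.3f}")
```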
In fact, we now have many scholars and practitioners deeply invested in these issues, pertaining to AI specifically but, more generally, to any computational system. Numerous programs, centers, policies, and initiatives have been created (including some I had a part in, such as FATE and RAISE). While some of these have been around for a while, most have emerged in just the last three to five years. These efforts, while mostly well intentioned, often reflect the lack of a clear vision or understanding of what responsible or ethical AI really means. Let's take a quick look at how some of the big tech companies have been addressing Responsible AI.
Microsoft created the AI, Ethics, and Effects in Engineering and Research (Aether) Committee, which came up with six AI principles: fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability. Google ties its responsible AI practices to those of responsible software development, emphasizing aspects such as fairness, interpretability, privacy, and security. Facebook recognizes five pillars for responsible AI: privacy and security, fairness and inclusion, robustness and safety, transparency and control, and accountability and governance.
If you look around, most tech companies, big and small, have started acknowledging these issues and laying out frameworks, at least in principle, to address them. But the big challenges remain. Not only are these frameworks often incomplete or lacking measurable actions, but they also tend to be disconnected from these companies' business objectives. This makes it hard for employees and partners to take meaningful action toward the lofty (and theoretical) goals. And then there are the regulators, who are quite blindsided by these developments and still focused on some of the "big tech problems" (case in point: the Facebook Papers). Their efforts are often reactive, shortsighted, and inadequate to really address the problem. How much can we count on them to address ethics in AI systems?
Finally, let's not forget those of us who are educators. We are trying to keep up with new problems emerging as development in the AI field continues at an unprecedented speed. I have often complained about how much of our time and energy goes into putting out fires set by some of these big AI companies. For example, my lab has been engaged in work to de-bias search results from Google. Wouldn't it be better if those results were not so biased to begin with? As an educator, I would rather spend my energy educating the next generation of students and developers, who would build search and recommender services with ethical considerations, than keep fixing problems created by trillion-dollar companies.
But I don't want to whine about this (anymore). This is not a simple situation, and there are no simple solutions. We have all inherited and created this problem, and it is all of our responsibility to address it. Pointing fingers won't get us far. We learn, we share, and we work together toward a vision of AI that is humane and inclusive.
Responsible AI is an emerging area attracting lots of attention from regulatory agencies, watchdogs, and policymakers, and it will continue to see new developments in the coming years. It's all right that we don't have a consensus, even about some of the basic problem definitions, let alone about possible solutions. But I'm glad we are starting to have some serious conversations about these issues. Responsible and Ethical AI is too big and too important to be left to any single sector, whether public, private, or educational. It is our collective responsibility, and I hope that wherever we go from here, we go together.
Cite this article in APA as: Shah, C. (2021, November 21). Responsible AI: Whose responsibility is it? Information Matters, 1(11). https://r7q.22f.myftpupload.com/2021/11/responsible-ai-whose-responsibility-is-it/