Towards Responsible AI: Leveraging Multi-partner Engagement—Fireside Chat with Carolyn Watters

Shalini Urs

Trust Deficit

The trust deficit is a term bandied about increasingly these days. It describes the lack of trust needed for healthy relationships between people and institutions—police, government, academia, science, the press—and it is universal, happening in every corner of the world. Paradoxically, the world is getting increasingly complex, interlinked, and layered, and life in it is almost impossible without trust. Now that technology has seeped into our lives in little things, such as asking Siri to set a timer or Google’s query auto-completion (QAC), and in big things, such as autonomous vehicles, trust in technology and systems becomes critical.

The vast potential of AI is neither hyperbole nor a truism but real, as is borne out by evidence across myriad areas—from combatting terrorism to fighting the pandemic. The growth and impact of AI on the world are extensive. However, when AI algorithms introduce bias, preventable errors, and poor decision-making, they cause mistrust among the very people they are supposed to be helping.

The AI Index is an annual study of AI impact and progress developed by an interdisciplinary team at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) in partnership with organizations from industry, academia, and government. The subtitle of the latest AI Index (2022), AI’s Ethical Growing Pains, captures the essence of the crossroads at which the AI field now stands (Miller, 2022).

—AI systems also produce more harm along with more impressive capability, and great responsibility comes with great power—

The AI Index 2022 notes that AI systems produce more harm along with more impressive capability, and that great responsibility comes with great power. The bigger and more capable an AI system is, the more likely it is to produce outputs that are out of line with our human values, says Jack Clark, co-director of the AI Index Steering Committee. “This is the challenge that AI faces,” he says. “We have systems that work really well, but the ethical problems they create are burgeoning.”

Thus, the major challenge is how to responsibly maximize AI’s upside while safeguarding against its dangers and discriminatory decisions. The solution lies in building trust into the relationship between AI, people, and society. Perhaps the answer lies in building responsible AI applications and systems.

What is Responsible AI?

Responsible AI is an emergent area of AI governance. The word “responsible” is an umbrella term covering the ethics and democratization of AI, and the end goal is socially responsible AI. Gillis (2021) defines Responsible AI as a governance framework that documents how a specific organization addresses the challenges around artificial intelligence (AI) from an ethical and legal point of view. Resolving ambiguity about where responsibility lies if something goes wrong is an essential driver for responsible AI initiatives. Furthermore, he lists seven qualities and principles of Responsible AI: explainable, monitorable, reproducible, secure, human-centered, unbiased, and justifiable.
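To make the definition concrete, here is one hypothetical way an organization might operationalize such a governance framework: a pre-deployment checklist that records, for each AI system, whether each of Gillis’s seven principles has been reviewed and signed off. The sketch below is purely illustrative; the class and field names are assumptions, not part of any published standard.

```python
from dataclasses import dataclass, field

# Gillis's (2021) seven qualities and principles of Responsible AI.
PRINCIPLES = [
    "explainable", "monitorable", "reproducible",
    "secure", "human-centered", "unbiased", "justifiable",
]

@dataclass
class ResponsibleAIChecklist:
    """Illustrative governance record for one AI system (names are hypothetical)."""
    system_name: str
    # Maps each principle to the reviewer who signed off, or None while pending.
    signoffs: dict = field(default_factory=lambda: {p: None for p in PRINCIPLES})

    def sign_off(self, principle: str, reviewer: str) -> None:
        if principle not in self.signoffs:
            raise ValueError(f"Unknown principle: {principle}")
        self.signoffs[principle] = reviewer

    def ready_to_deploy(self) -> bool:
        # Cleared only when every principle has a named reviewer behind it.
        return all(self.signoffs.values())

checklist = ResponsibleAIChecklist("loan-approval-model")
checklist.sign_off("explainable", "model-risk team")
print(checklist.ready_to_deploy())  # False: six principles still await review
```

The point of the sketch is that deployment readiness becomes a property of the governance record, not of the model: accountability is traceable to named reviewers, which is precisely the ambiguity Gillis says responsible AI initiatives aim to resolve.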

The White House Report on Big Data (2014) cautioned that “algorithmic decisions raise the specter of ‘redlining’ in the digital economy – the potential to discriminate against the most vulnerable classes of our society under the guise of neutral algorithms.” While instances of bias (both conscious and unconscious), discrimination, and injustice abound, only a few cases make it to the headlines and stir the pot, such as the “Tiger Mom Tax” (Angwin et al., 2015). Cheng et al. (2021) argue that we must think beyond algorithmic fairness and connect the major aspects of AI that potentially cause its indifferent behavior, so as to build long-lasting trust between AI and human beings; they provide a systematic framework of Socially Responsible AI Algorithms.

Towards Responsible AI

Building socially responsible AI has been the grand challenge confronting organizations working in AI ever since algorithmic bias, injustice, and discrimination began to be reported in our digital society.

AI’s power needs guardrails, and a framework of trust is imperative, cautions the Responsible AI Institute, which is working towards “AI We Can Trust.” According to the Institute, when not designed thoughtfully and responsibly, AI systems can be biased, insecure, and non-compliant with existing laws, even going so far as to violate human rights. Therefore, more than with any other technology, it is imperative that AI systems be designed and managed responsibly. The AI for Good Foundation, a non-profit that aims to bring together the best minds and technologies to apply AI research to global sustainable development, identifies five lean design practices to follow in the early stages of the AI lifecycle: begin by empathizing; conduct inclusive research; keep humans always in the loop; prototype and iterate; promote multidisciplinary teams.

Saptharishi (2022) argues that designing responsible AI applications requires focusing first on the human experience and then aligning what is technically feasible and viable for the business. He outlines three rules for creating responsible, powerful AI tools by keeping them human-centered: suit the tech to the problem, not the other way around; preserve human agency by embracing both clarity and ambiguity; and optimize for accountability by properly contextualizing.

Responsible AI Principles and Strategies: Stakeholders’ Responses

Major stakeholders (industry, academia, governments, and civil society) have been quick to respond and join the bandwagon. Companies, from tech giants such as Google and Microsoft to consultancies such as Accenture, EY, and PwC, have articulated their Responsible AI (RAI) principles, standards, guidelines, and playbooks. Google was among the first companies to develop and publish AI Principles. It has been updating them since, and in 2021 it formed a new center of expertise on Responsible AI and Human-Centered Technology within Google Research, devoted to developing technology and best practices for the technical realization of the AI Principles’ guidance.

Omdena, a platform that brings together the global AI community and impact-driven organizations to educate, innovate, and deploy real-world AI solutions, lists the top ten foundations that leverage artificial intelligence to address real-world problems.

Governments worldwide have also responded with initiatives, plans, and policy documents on RAI.

In the US, the National Artificial Intelligence Initiative (NAII) was established in 2021 as a coordinated program across the entire Federal government to accelerate AI research and application for the Nation’s economic prosperity and national security; it includes Trustworthy AI as one of its six strategic pillars of work. In Europe, the EU has been working towards RAI since 2018 and published the Ethics Guidelines for Trustworthy Artificial Intelligence in 2019. The UK National AI Strategy rests on three pillars, one of which is AI governance, and the Centre for Data Ethics and Innovation is a government expert body enabling the trustworthy use of data and AI.

Building on the National Strategy on AI, the National Institution for Transforming India (NITI Aayog) of the Government of India brought out a two-part approach paper in 2021. The first part, titled “Towards Responsible AI for All,” aims to establish broad ethics principles for the design, development, and deployment of AI in India, drawing on similar global initiatives but grounded in the Indian legal and regulatory context. The second part explores the means of operationalizing these principles across the public sector, the private sector, and academia. Within this framework, the Indian government hopes that AI can flourish in a way that benefits humanity, mitigates risks, and is inclusive, bringing the benefits of AI to all.

The Australian government, committed to ensuring that all Australians share in the benefits of artificial intelligence (AI), brought out the Artificial Intelligence Action Plan as part of the 2021-22 Budget. In addition, its Artificial Intelligence (AI) Ethics Framework guides businesses and governments in designing, developing, and implementing AI responsibly.

Beyond Virtue Signaling: AI ethics everywhere

As the AI Index 2022 notes, RAI has now moved beyond virtue signaling, if the number of peer-reviewed publications is any measure of growth and impact:

“Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in related publications at ethics-related conferences. Algorithmic fairness and bias have shifted from primarily an academic pursuit to becoming firmly embedded as a mainstream research topic with wide-ranging implications. Researchers with industry affiliations contributed 71% more publications yearly at ethics-focused conferences in recent years.”

A simple Google Scholar search for “Responsible AI” reveals a tremendous increase in the number of publications since 2018. The count rose from 75 publications in 2018 to 301 in 2019, 988 in 2020, and 2,390 in 2021, an increase of more than thirtyfold, with 1,530 already in the current year, 2022.
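As a quick back-of-the-envelope check of these figures (a minimal sketch using only the counts quoted above), the year-over-year growth factors and the overall multiple can be computed directly:

```python
# Publication counts for "Responsible AI" on Google Scholar, as quoted above.
counts = {2018: 75, 2019: 301, 2020: 988, 2021: 2390}

# Year-over-year growth factors.
years = sorted(counts)
for prev, curr in zip(years, years[1:]):
    print(f"{prev} -> {curr}: x{counts[curr] / counts[prev]:.1f}")

# Overall growth from 2018 to 2021.
print(f"2018 -> 2021: x{counts[2021] / counts[2018]:.1f}")  # x31.9, i.e., more than 30-fold
```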

Carolyn Watters on RAI and leveraging multi-partner engagement

In this episode of InfoFire, I am in conversation with Dr. Carolyn Watters, Professor Emeritus, Faculty of Computer Science, Dalhousie University. Watters served as the inaugural Chief Digital Research Officer of the National Research Council of Canada (2019-2021) and, earlier, as Provost and Vice-President Academic at Dalhousie University.

Our fireside chat begins with Watters giving a little primer on AI. According to Watters, AI is the simulation of human intelligence: it mimics what we do. The notion of augmented intelligence means we use algorithms to complement human intelligence, to work with it, and to give us the right stuff at the right time so that we make better decisions; the decisions themselves, however, are made by people. So it is more of a hybrid, and machine learning is just an algorithm and a part of AI, not a separate thing. It is one of the tools that AI uses, and sometimes we lump all that stuff in the same bag, she says. For Watters, RAI is about building frameworks based on human principles of accountability. When we write programs, we are not too worried about accountability; we want them to work and do things efficiently, on time, and without bugs. However, when decisions are made, who is accountable for them?
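Watters’s hybrid point can be sketched in code. In the illustrative pattern below (not anything discussed at the code level in the chat, and with invented names throughout), the algorithm only ranks options and surfaces the most relevant ones; a person makes, and is accountable for, the final call.

```python
def augmented_decision(options, score, human_decide):
    """Augmented-intelligence pattern: the algorithm ranks, the human decides."""
    # The model's role: surface the most relevant options at the right time.
    ranked = sorted(options, key=score, reverse=True)
    # The human's role: the final decision, and accountability, stay with a person.
    return human_decide(ranked[:3])

# Toy example: a reviewer sees the top-ranked loan applications, not a verdict.
toy_scores = {"app-17": 62, "app-42": 87, "app-08": 45, "app-99": 71}
choice = augmented_decision(
    options=list(toy_scores),
    score=toy_scores.get,             # stand-in for a real model score
    human_decide=lambda top: top[0],  # stand-in for actual human review
)
print(choice)  # "app-42": the highest-ranked option the human chose to accept
```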

Therefore, we need frameworks, and once you have a framework, you can think about compliance. It is about the regulations that governing bodies need to have in place so that responsibility covers not just the program but how it is used, how it enters the economy, how it affects poor people, how it affects women, all the way up and down. It is also about transparency: can the people affected by these algorithms understand what they do?

Second is the whole notion of equity and fairness. Who benefits from this? What data is used? Was it fair? Are all people represented, or just those you want to sell something to? Does it allow access for people who currently (in Canada and everywhere) do not have enough bandwidth, or who do not understand how to use it?

It is vital to ensure that a system does not discriminate against people because of bias in the algorithm or the data, and then, of course, there are the notions of safety, security, and privacy. How secure is our data? How safe are we when we are in an automated system? The last important part is the notion of governance: how do we come to grips with governing a world driven by gazillion-line algorithms?
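To make the bias point concrete, one of the simplest checks practitioners run is a demographic parity comparison: are a system’s positive decisions distributed evenly across groups? The minimal sketch below uses invented toy data and is one common illustration of such a check, not a method Watters describes.

```python
# A minimal demographic-parity check on toy data (invented for illustration).
# decisions[i] is 1 if the system approved person i, 0 otherwise;
# groups[i] is that person's group label.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def approval_rate(group: str) -> float:
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
# A large gap between group approval rates is a red flag that warrants an audit;
# it is evidence of disparate impact, though not proof of discrimination by itself.
print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, gap: {abs(rate_a - rate_b):.0%}")
```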

Listen to Carolyn Watters speak with passion about strategies for creating a level playing field with the help of AI, so that everyone can have equal access to, and opportunities for, leveraging these technologies. Policy frameworks, governance, regulation, and the processes and systems we put in place can steer change in the direction we would like, whether on equity or on bias; these issues can be managed not by the algorithm or the technology itself but by its governance. It is about bringing as many voices and perspectives as possible to the table. Governments and businesses are not doing enough; social advocacy and academic researchers are driving RAI, and governments and industry are now responding to these voices. It is our responsibility to ensure that AI systems that provide significant benefits are not, at the same time, destroying the social fabric.

References

Angwin, J., Mattu, S., & Larson, J. (2015, September 1). The Tiger Mom Tax: Asians are nearly twice as likely to get a higher price from Princeton Review. ProPublica. https://www.propublica.org/article/asians-nearly-twice-as-likely-to-get-higher-price-from-princeton-review

Arora, N., Banerjee, A. K., & Narasu, M. L. (2020). The role of artificial intelligence in tackling COVID-19. Future Virology, 15(11), 717-724.

Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially responsible AI algorithms: Issues, purposes, and challenges. Journal of Artificial Intelligence Research, 71, 1137-1181.

Gillis, A. S. (2021, January). Responsible AI. TechTarget. https://www.techtarget.com/searchenterpriseai/definition/responsible-AI

Miller, K. (2022, March 16). The 2022 AI Index: AI’s ethical growing pains. Stanford HAI. https://hai.stanford.edu/news/2022-ai-index-ais-ethical-growing-pains

Saptharishi, M. (2022, January 21). Responsible AI can’t exist without human-centered design. Fortune. https://fortune.com/2022/01/21/responsible-a-i-cant-exist-without-human-centered-design-artificial-intelligence-tech/

United Nations Counter-Terrorism Centre (UNCCT) & United Nations Interregional Crime and Justice Research Institute (UNICRI). (2021). Countering terrorism online with artificial intelligence: An overview for law enforcement and counter-terrorism agencies in South Asia and South-East Asia. A joint report by UNICRI and UNCCT. United Nations Office of Counter-Terrorism.

U.S. Executive Office of the President. (2014). Big data: Seizing opportunities, preserving values. https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf

Cite this article in APA as: Urs, S. (2022, June 29). Towards responsible AI: Leveraging multi-partner engagement—Fireside chat with Carolyn Watters. Information Matters, Vol. 2, Issue 6. https://informationmatters.org/2022/06/towards-responsible-ai-leveraging-multi-partner-engagementfireside-chat-with-carolyn-watters/

Shalini Urs

Dr. Shalini Urs is an information scientist with a 360-degree view of information and has researched issues ranging from the theoretical foundations of information sciences to Informatics. She is an institution builder whose brainchild is the MYRA School of Business (www.myra.ac.in), founded in 2012. She also founded the International School of Information Management (www.isim.ac.in), the first Information School in India, as an autonomous constituent unit of the University of Mysore in 2005 with grants from the Ford Foundation and Informatics India Limited. She is currently involved with Gooru India Foundation as a Board member (https://gooru.org/about/team) and is actively involved in implementing Gooru’s Learning Navigator platform across schools. She is professor emerita at the Department of Library and Information Science of the University of Mysore, India. She conceptualized and developed the Vidyanidhi Digital Library and eScholarship portal in 2000 with funding from the Government of India, which became a national initiative with further funding from the Ford Foundation in 2002.