The Explainability Imperative
Michael Ridley
If artificial intelligence is so smart, why can’t it explain itself? This somewhat flippant question has preoccupied AI developers and researchers since the field’s earliest days. More than 50 years later, the question is still relevant and increasingly urgent. When generative AI can hallucinate with impunity, there is a problem. Explainability is part of the answer.
The field of “explainable AI” (XAI) is “concerned with developing approaches to explain and make artificial systems understandable to human stakeholders.” While this sounds promising, in practice “human stakeholders” has almost exclusively meant system designers, and the explanations provided have been highly technical in nature. Tim Miller put the problem bluntly: “AI researchers are building explanatory agents for ourselves, rather than for the intended users … the inmates are running the asylum.”
At the risk of overloading with acronyms, XAI has given way to HCXAI.
Focusing on the average, non-expert lay user rather than on system designers is at the heart of the field of human-centered explainable AI (HCXAI). The key question for HCXAI is simply “explainable to whom?” The “who” in explanations is crucial because, as the cognitive scientist Tania Lombrozo notes, explanations “are more than a human preoccupation – they are central to our sense of understanding, and the currency in which we exchange beliefs.” While opening the opaque “black box” of AI is the focus of XAI, Upol Ehsan and Mark Riedl argue that “not everything that matters lies inside the box. Critical answers can lie outside it. Why? Because that’s where the humans are.”
What lies outside the box are individuals (users) in specific contexts, AI systems that are value-laden with baked-in perspectives and assumptions, and human-machine interactions that are socially constructed. HCXAI requires that explanations from AI systems be responsive to the user and their context, but it also expects those explanations to support actionability and contestability. Actionability means the explanation is useful to the user, allowing them to do something with it or act on it. Contestability allows the user to question, even challenge, the explanation, adding “why?” to the key question of “who?”
This highlights two core principles of HCXAI: reflection over acquiescence and facilitating self-explanation. Explanations should not be merely answers but dialogues with users, where passive acceptance is replaced with critical reflection. In part this is achieved by providing sufficient, on-demand information from which users can construct their own explanations. Given that some explanations provided by AI systems are really justifications and, worse, some are blatant disinformation, empowering users in this manner is a way to assess trust and accountability.
Expectations are fine, but operationalizing HCXAI is better. One technique for embedding HCXAI principles in AI systems is “seamful” design. Rather than the current vogue for frictionless, seamless interactions, HCXAI promotes “showing the seams”: exposing the gaps that reveal a system’s inherent limitations, deficiencies, and assumptions. Transparency and disclosure replace the simplicity and ease of use that can mask underlying concerns.
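To make this concrete, the short Python sketch below illustrates one possible, entirely hypothetical way a system might “show the seams.” The class name, fields, and contest URL are illustrative assumptions, not drawn from this article or from any existing system; the point is simply that an explanation can travel with its confidence, known limitations, and provenance, and offer the user a way to contest it, supporting the actionability and contestability described above.

# Hypothetical sketch of a "seamful" explanation payload.
# Rather than returning only a polished answer, the system surfaces its
# seams: confidence, known limitations, provenance, and a way to contest.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SeamfulExplanation:
    answer: str                    # the system's response to the user
    rationale: str                 # plain-language reason offered for the answer
    confidence: float              # the system's own estimate, 0.0-1.0 (may be miscalibrated)
    known_limitations: List[str] = field(default_factory=list)  # e.g., stale training data
    data_sources: List[str] = field(default_factory=list)       # provenance the user can inspect
    contest_url: str = ""          # hypothetical endpoint where the user can challenge the answer

    def show_seams(self) -> str:
        """Render the answer together with its seams rather than hiding them."""
        return "\n".join([
            f"Answer: {self.answer}",
            f"Why: {self.rationale}",
            f"Confidence (self-reported): {self.confidence:.0%}",
            "Known limitations: " + ("; ".join(self.known_limitations) or "none declared"),
            "Sources: " + ("; ".join(self.data_sources) or "not disclosed"),
            f"Disagree? Contest this explanation at: {self.contest_url or 'n/a'}",
        ])

# Example use (all values invented for illustration):
example = SeamfulExplanation(
    answer="Recommended: Title X",
    rationale="Users with overlapping reading histories rated it highly.",
    confidence=0.62,
    known_limitations=["training data ends in 2023", "popularity bias toward bestsellers"],
    data_sources=["circulation logs", "publisher metadata"],
    contest_url="https://example.org/contest/1234",
)
print(example.show_seams())

Rendering the output of show_seams() trades a frictionless, single-answer experience for a disclosure that invites reflection rather than acquiescence.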
Despite the apparent reasonableness of HCXAI, it continues to be more honoured in the breach than in the observance. Could AI regulation or legislation fix this? Regulating technology has a troubled history: it is often viewed as either “too late” or “too soon,” and its obligations as either “too little” or “too much.” The widely discussed “right to explanation” purported to be part of the 2016 European Union General Data Protection Regulation (GDPR) was toothless, if it was there at all. While the EU AI Act, the US Blueprint for an AI Bill of Rights, and others acknowledge the importance of explanations, that acknowledgement is largely confined to high-risk applications, and accountability measures remain unclear. Can regulation embed explainability? So far, with too little arriving too late, the answer is no.
Unlike the numerous and well-funded XAI researchers, the HCXAI research community is small and striving for influence. However, explainability is now part of the six “grand challenges” of human-centered AI, despite being recognized as a “wicked problem,” perhaps “the wicked problem of artificial intelligence research.”
Latanya Sweeney, Director of the Public Interest Tech Lab at Harvard, notes that “technology designers are the new policymakers; we didn’t elect them, but their decisions determine the rules we live by.” With the release of ChatGPT and other generative AI systems, concerns about trust and accountability have become public debates. This is the essential groundswell that might propel HCXAI forward and make “where the humans are” a central feature of AI system design and a public policy priority.
See Michael Ridley (2024). Human-centered explainable artificial intelligence: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology, 1–23. https://doi.org/10.1002/asi.24889
Cite this article in APA as: Ridley, M. (2024, April 25). The explainability imperative. Information Matters, 4(4). https://informationmatters.org/2024/04/the-explainability-imperative/
Author
For many years, Michael Ridley was the Chief Librarian and Chief Information Officer (CIO) at the University of Guelph, where he is now Librarian Emeritus. Ridley recently completed a PhD at the Faculty of Information and Media Studies, Western University ("Folk Theories, Recommender Systems, and Human Centered Explainable AI (HCXAI)"). Prior to his appointment at Guelph, he held positions at the University of Waterloo (Associate University Librarian) and McMaster University (Head of Systems and Technical Services, Health Sciences Library). His professional career as a librarian ended where it began: Ridley's first appointment as an academic librarian was at Guelph, where he served as a Reference Librarian, Catalogue Librarian, and Library Systems Analyst.