Lessons in AI Literacy and Explainability from Lucy and Ricky
Michael Ridley
In the classic 1950s TV sitcom I Love Lucy, when Lucy did something outrageous her husband Ricky would exclaim “Lucy, you’ve got some explainin’ to do!” Typically, Lucy would come up with some sort of implausible response. Hilarity ensued.
Well, it’s not the 1950s anymore but 70+ years later Large Language Models (LLMs) and AI chatbots (e.g., ChatGPT, Gemini) are doing outrageous things (hallucinations, fabrications, misinformation, and worse) and the explanations, if there are any, are just as implausible. And it isn’t funny.
—Explanations Matter. Why Don't We Get Them?—
The enduring memes about AI chatbots are that they are black boxes and stochastic parrots. Opaque mimics. And yet we are using them for nearly all aspects of our lives. We need them to be trustworthy, and trust is earned.
I’m with Ricky: there’s some explainin’ to do.
Explanations matter. Dr. Tania Lombrozo is a cognitive scientist who researches explanations. She notes that explanations “are more than a human preoccupation–they are central to our sense of understanding, and the currency in which we exchange beliefs.” We understand the world and ourselves because we ask questions and receive explanations.
We should always ask AI chatbots Why?, Why not?, and Why not this instead? (so-called “counterfactual” questions). The explanations we get back should be actionable (we can do something with them; make decisions) and contestable (we can challenge them; seek further clarification).
If explanations do matter, how will we get better explanations from AI chatbots?
We could (and should) ask OpenAI, Meta, Google, et al. to build explainability into their systems. However, one of the other memes about AI and LLMs is that they are so complex we don’t really know how they work. True … to a point. We actually know a lot about how they work and especially how they should work. Over to you, tech bros.
While we’re waiting, there are a few things we can do in the meantime. First the personal, then the political.
Become AI literate. Our lives are busy. We want to do things (prompt ChatGPT for information) and move on to the next priority. But being AI literate requires a new perspective: reflection over acquiescence. Being reflective about what the AI chatbot is telling us rather than uncritically accepting it puts you in a position of control. It gives you the ability to ask those essential questions and to demand the AI chatbots do better.
Become an explainability activist. In most jurisdictions (country, province, state, even municipality), there are laws, regulations, bylaws, and guidelines to protect consumers against poor, dangerous, and exploitive products. AI chatbots are consumer products. We should expect existing consumer protections to respond to our needs. Advocating for AI chatbot explainability with those in power is one way to make these systems more effective, safe, and trustworthy.
Being AI literate means looking out for yourself (critical thinking) and being an explainability activist means looking out for your community (political action).
Latanya Sweeney, Director of the Public Interest Tech Lab at Harvard, notes that “technology designers are the new policymakers; we didn’t elect them, but their decisions determine the rules we live by.” It is beyond time we rebalance this relationship and put people at the center of AI regulations, safety, and trust.
There is definitely “some explainin’ to do.”
Cite this article in APA as: Ridley, M. (2025, November 18). Lessons in AI literacy and explainability from Lucy and Ricky. Information Matters. https://informationmatters.org/2025/11/lessons-in-ai-literacy-and-explainability-from-lucy-and-ricky/
Author
For many years, Michael Ridley was the Chief Librarian and Chief Information Officer (CIO) at the University of Guelph, where he is now Librarian Emeritus. Ridley recently completed a PhD at the Faculty of Information and Media Studies, Western University ("Folk Theories, Recommender Systems, and Human Centered Explainable AI (HCXAI)"). Prior to his appointment at Guelph, he held positions at the University of Waterloo (Associate University Librarian) and McMaster University (Head of Systems and Technical Services, Health Sciences Library). His professional career as a librarian began where it ended. Ridley's first appointment as an academic librarian was at Guelph, where he served as a Reference Librarian, Catalogue Librarian, and Library Systems Analyst.