Explanations


Lessons in AI Literacy and Explainability from Lucy and Ricky

In the classic 1950s TV sitcom I Love Lucy, whenever Lucy did something outrageous, her husband Ricky would exclaim, “Lucy, you’ve got some explainin’ to do!” Typically, Lucy would come up with some implausible response, and hilarity ensued. It isn’t the 1950s anymore, but 70-plus years later, Large Language Models (LLMs) and AI chatbots (e.g., ChatGPT, Gemini) are doing outrageous things (hallucinations, fabrications, misinformation, and worse), and their explanations, when they offer any, are just as implausible. And it isn’t funny.


The Explainability Imperative

If artificial intelligence is so smart, why can’t it explain itself? This somewhat flippant question has preoccupied AI developers and researchers since the field’s earliest days. More than 50 years later, the question remains relevant and is increasingly urgent. When generative AI can hallucinate with impunity, there is a problem. Explainability is part of the answer.
