
Confidence Without Comprehension: Why AI Literacy Needs a Reset


Jenna Hillhouse

A graduate student joyously submits a research paper, having completed it in a fraction of the usual time. A program manager skims a concise, AI-generated literature summary just before a funding meeting. A librarian fields yet another question about which prompt will “get the best answer.” In each of these cases, generative AI seems to be doing exactly what it promises: accelerating search, delivering polished outputs on demand, and reducing the struggle of finding, evaluating, and making sense of information. Beneath the efficiency, however, lies a risk. When AI tools collapse complex search processes into seamless responses, they can obscure uncertainty, mask gaps in understanding, and smooth over meaningful distinctions of meaning, relevance, and confidence. Users may feel informed without ever confronting the limits of their knowledge or the assumptions guiding how information is interpreted. The challenge for libraries is not just teaching people how to use AI tools, but how to think with them without surrendering judgement.


Much of today’s AI literacy instruction focuses on the mechanics: how to prompt effectively, how to cite AI outputs, how to avoid plagiarism, and how to choose among tools. These lessons are useful but insufficient. Tool-centric approaches privilege technique over cognition, reinforcing a checklist mentality: follow the steps, get an answer, and move on. Generative AI tools produce fluent, authoritative-sounding responses that can blur the line between retrieval, synthesis, and interpretation, presenting the multi-step activity of information search as a single, finished answer. The very features that make AI tools appealing, such as speed, coherence, and confidence, also make them difficult to question. Polished, well-structured responses invite acceptance and reduce the impulse to slow down, verify sources, or consider what may be missing or oversimplified. If AI literacy stops at “how to use the tool,” libraries may unintentionally reinforce confidence without comprehension. What is needed instead is instruction that focuses on uncertainty, interpretation, and responsibility: skills that librarians have long taught, but must now adapt to a new kind of information partner.

In high-reliability contexts such as healthcare, national security, or aviation, working with information is never just about access; it is about judgement under uncertainty. Decision-makers must continually ask what they don’t know, what assumptions they’re making, and what happens if the information is wrong. These environments emphasize information practices that are directly relevant to AI literacy: triangulating multiple sources, examining inconsistencies, recognizing when information seems complete but is not, and understanding how prior experiences shape interpretation. Errors are not the result of missing information alone; they can emerge from misplaced confidence, cognitive shortcuts, and over-reliance on seemingly authoritative representations. Libraries may not operate in life-or-death contexts, but the habits cultivated through library instruction shape how students, researchers, and professionals engage information elsewhere. AI literacy that draws on high-stakes information practices can reframe AI tools as collaborators whose outputs require scrutiny.

Instead of treating AI as an extension of search skills, libraries can position it as a form of sensemaking, emphasizing how individuals construct meaning from incomplete, contextual, and sometimes conflicting information. In this framing, AI systems become powerful but fallible participants in information practices. Several instructional shifts can support this reframing. First, instruction can move from an emphasis on “how to prompt” toward recognizing when to scrutinize AI-generated outputs, encouraging learners to identify situations where responses may be misleading, overly generalized, or confidently wrong. Second, rather than focusing on the final answer alone, instruction can make reasoning visible by asking learners to document how AI outputs were evaluated, challenged, or revised, making judgement explicit rather than implicit. Finally, instruction can shift attention from outputs to skepticism by asking learners to lay out what an AI system leaves out: which questions remain unanswered, and how consulting alternative sources might reshape understanding. These shifts place human interpretation, accountability, and reflection back at the center of information practices, reminding learners that responsibility for understanding cannot be delegated to automated systems.

Libraries are uniquely positioned to lead this reframing, as they already sit at the intersection of information technologies, instruction, and critical thinking about knowledge and evidence. Practical next steps include integrating AI tools into instruction as intentionally imperfect research assistants and using their outputs as material for critique rather than as endpoints. Instructional activities might ask students to compare AI-generated summaries with primary sources, map how different prompts shape responses, or identify moments where AI creates false coherence. Librarians can also collaborate with faculty to design assignments that reward process transparency, showing how conclusions were reached, rather than conclusions that appear authoritative but are weakly grounded. Most importantly, libraries can model an ongoing practice of questioning answers, acknowledging limitations, and reshaping understanding. By openly discussing what AI can and cannot do, and by treating uncertainty as a feature of research rather than a failure, librarians reaffirm their role as stewards of thoughtful inquiry.

As generative AI becomes increasingly embedded in everyday information practices, the important question facing libraries is no longer whether AI will be used, but how judgement will be preserved. AI literacy is not about mastering the systems; it is about developing critical judgement and thinking. By reframing AI literacy around sensemaking, uncertainty, and responsibility, libraries can help ensure that the immediacy of information does not replace comprehension.

Cite this article in APA as: Hillhouse, J. (2025, December 19). Confidence without comprehension: Why AI literacy needs a reset. Information Matters. https://informationmatters.org/2025/12/confidence-without-comprehension-why-ai-literacy-needs-a-reset/

Author

Jenna Hillhouse

https://orcid.org/0009-0004-7523-4909