The Inward Turn: From AI Outputs to AI Discourse
Troy Davis
I didn’t set out to include the study of metaphor, anthropomorphism, and explanation, or the application of discourse analysis, in my personal information literacy toolkit, but I kept noticing a pattern. When I read corporate blog posts announcing new AI capabilities, the language was often rich with a framing vocabulary of agency, intention, and motivation: systems that “understand” context, models that “learn” from feedback, assistants that “know” what you need.
But when I read the technical articles about how these systems actually function (probability distributions, token prediction, gradient descent), the language told a different story, the anthropomorphism muted though not completely gone. These tools were highly probabilistic, error-prone, brittle. Two vocabularies for the same technology. That gap started to feel like a classic information literacy problem.
Librarians have spent decades teaching students to evaluate sources, trace authority, and recognize how information is constructed. We’ve built sophisticated frameworks for this work. The advent of generative AI has obviously intensified these efforts. We’ve provided frameworks for evaluating AI-generated content: Is this output accurate? What biases might it contain? But as we’ve rushed to develop “AI literacy” programming, I wonder if we’ve overlooked a rich space where these same frameworks can be applied: the language we use to describe these systems in our own conversations, instructional materials, task forces, and workshops.
—"AI literacy" as a phrase may be part of the problem—
I recently came across a blog post from the Center on Privacy & Technology at Georgetown Law that squarely confronted this question. They realized that, as a center focused on privacy and law in the digital era, writing constantly about AI, their own language was undermining their critical mission. What’s interesting: their response wasn’t “learn some tools.” It was four principles for disciplinary self-awareness:
- Be specific about what the technology actually does
- Acknowledge when corporate opacity prevents full understanding
- Name the corporations responsible
- Attribute agency to humans, never to the technology itself
That last principle strikes me as the crucial one, especially for library users. Yes, anthropomorphic language is imprecise, but it also shapes expectations, calibrates trust, and obscures accountability. If an AI tool “made a mistake,” who’s responsible? When users routinely hear phrases like “the model knows,” “it doesn’t want to answer,” or “it prefers coherent completions,” this framing does some heavy lifting. It has real consequences for how people evaluate and emotionally relate to these technologies.
The Georgetown intervention points to a practice academic libraries could adopt. Their pivot, in effect: we write about privacy and law constantly, and we realized our own language was undermining our critical mission. We could try something similar: we’re librarians, and we teach information literacy about AI. What if we applied that same critical apparatus to the language we use to describe AI? Could uncritical discourse upstream affect everything else downstream?
I’ve come to suspect that “AI literacy” as a phrase may be part of the problem. It naturalizes AI as a coherent category requiring specialized knowledge. This framing could inadvertently send professionals looking for new competencies when they already possess the critical skills needed for this work. In the Georgetown example, there’s no mention of becoming an expert in anything. Just noticing that words matter.
That’s the good news. We don’t need a new literacy. We have the ones we already practice. Consider the ACRL Framework for Information Literacy. “Authority Is Constructed and Contextual” describes exactly what happens when corporate communications position AI as “knowing” or “understanding.” Authority is being linguistically constructed for algorithmic systems. “Information Has Value” illuminates why: anthropomorphic framing has economic value, building trust and driving adoption. “Information Creation as a Process” reminds us that press releases announcing AI capabilities are themselves information products with authors, interests, and purposes.
For the past year, I’ve been experimenting with what this might look like in practice. The project, which I call Discourse Depot, uses structured prompts and data schemas to systematically analyze how AI is described in corporate communications, news articles, research articles, and policy documents. (Yes, I’m using an LLM to deconstruct language about LLMs.) The goal is to make visible the metaphorical moves, surface the linguistic choices, and audit the explanations that construct AI as an autonomous agent or a “mind” rather than what it actually is: an impressive (but mechanistic) technology and a corporate product with human designers, investors, and beneficiaries.
From Discourse Depot: Example reframing from an audit of Google’s AI and the Future of Learning
| Original Anthropomorphic Frame | Mechanistic Reframing | Technical Reality Check | Human Agency Restoration |
| --- | --- | --- | --- |
| “AI can act as a partner for conversation, explaining concepts, untangling complex problems.” | The interface allows users to query the model iteratively, prompting it to generate summaries or simplifications of complex text inputs. | The model does not ‘act as a partner’ or ‘untangle’ problems; it processes user inputs as context windows and generates text that statistically correlates with ‘explanation’ patterns in its training data. | Google developed this interface to simulate conversational turn-taking, encouraging users to provide more data and spend more time on the platform. |
These formulations aren’t perfect, but they emphasize human agency and institutional accountability, countering the framing of AI as autonomous. The real intellectual work, though, was in the translation: taking the critical habits information literacy had already given me and applying them to a new domain.
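To make the “structured prompts and data schemas” idea concrete, here is a minimal sketch of what one audit entry and its accompanying prompt could look like. The dataclass, field names, and prompt wording are my own illustration under the assumption that each audited passage maps to the four columns in the table above; this is not the actual Discourse Depot schema, and the step of sending the prompt to a particular model is deliberately left out.

```python
# Illustrative sketch only: field names and prompt text are hypothetical,
# modeled on the four-column audit table above, not Discourse Depot's code.
from dataclasses import dataclass, asdict
import json
import textwrap


@dataclass
class DiscourseAuditEntry:
    """One audited claim from a source document."""
    source: str                      # e.g., a corporate blog post or policy document
    original_frame: str              # the anthropomorphic phrasing, quoted verbatim
    mechanistic_reframing: str       # what the system mechanically does
    technical_reality_check: str     # why the original framing misleads
    human_agency_restoration: str    # the people and institutions actually acting


AUDIT_PROMPT_TEMPLATE = textwrap.dedent("""\
    You are auditing how an AI system is described in the passage below.
    For each sentence that attributes agency, knowledge, or intent to the
    technology itself, return a JSON object with these keys:
    original_frame, mechanistic_reframing, technical_reality_check,
    human_agency_restoration.

    Passage:
    {passage}
    """)


def build_audit_prompt(passage: str) -> str:
    """Assemble the structured prompt for a single passage."""
    return AUDIT_PROMPT_TEMPLATE.format(passage=passage)


if __name__ == "__main__":
    # Example entry, paraphrasing the table row above.
    entry = DiscourseAuditEntry(
        source="Google, 'AI and the Future of Learning'",
        original_frame="AI can act as a partner for conversation...",
        mechanistic_reframing="The interface lets users query the model iteratively "
                              "to generate summaries of complex text inputs.",
        technical_reality_check="The model generates text that statistically "
                                "correlates with 'explanation' patterns in training data.",
        human_agency_restoration="Google designed the interface to simulate "
                                 "conversational turn-taking.",
    )
    print(json.dumps(asdict(entry), indent=2))
    print(build_audit_prompt("Our assistant understands what you need."))
```

The point of the schema is simply to force every anthropomorphic claim through the same four questions, so the audit output stays comparable across corporate posts, news coverage, and policy documents.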
The bad news: the technology behind generative AI tools will always deal in probabilities, not facts. I’ve had to make peace with this. And as a community, we’ll need to reckon with questions like whether placing these systems between users and sources is worth the risk. This will require attention. But attention is what we do. Critical evaluation is what we teach.
What excites me isn’t that librarians must acquire some new way to lead on AI literacy; it’s that we already possess the tools and critical energy for that leadership. We’ve been teaching source evaluation, authority construction, and information creation processes for decades. The only thing new is the object of analysis.
Cite this article in APA as: Davis, T. (2026, January 7). The inward turn: From AI outputs to AI discourse. Information Matters. https://informationmatters.org/2025/12/the-inward-turn-from-ai-outputs-to-ai-discourse/