Opinion

When Search Starts Answering: What Libraries Need to Explain About AI

Elaine Kong

Library search tools aren’t just returning results anymore. They now summarize, suggest, and sometimes even interpret information for us. The real challenge is no longer access alone; it is helping people understand how those tidy answers are shaped, limited, and presented as trustworthy. An answer-like interpretation of what a collection appears to mean feels helpful, sure. But it also quietly changes how people decide what is credible, what is complete, and what feels neutral. Before libraries ask people to trust AI-mediated discovery, they need to explain what the system is doing on the user’s behalf. While this shift is visible across GLAM institutions (galleries, libraries, archives, and museums), libraries offer a particularly clear case because discovery systems are often the first place users encounter this kind of AI-driven interpretation.

For a long time, library discovery followed a familiar model: users entered a query, reviewed a list of results, and made their own judgments about relevance and credibility. That process was never perfectly neutral, but it still left interpretation largely in the user’s hands. Generative AI changes that balance. Instead of simply retrieving records, newer systems such as Primo Research Assistant can summarize articles, suggest themes, translate questions into search strategies, and produce responses that feel closer to answers than result lists. That may help users get started more quickly, especially when they feel overwhelmed or do not know how to search. But it also means the interface is no longer just a pathway into the collection. It is becoming part of how the collection is understood.

That distinction matters because answer-like outputs can hide the choices behind them. A user may not know which materials were included, which were excluded, how the system ranked relevance, or how it translated a complex question into something searchable. A smooth response can create the impression that the library has delivered a complete and balanced account, when in reality the output reflects a chain of technical and institutional decisions. The real risk is not that the AI will be obviously wrong. It is that it can sound convincingly half-right, so the gaps become much harder to notice. What looks like a clean, neutral summary is often built on uneven coverage, inconsistent metadata, hidden ranking rules, or language biases the reader never notices. In that sense, AI-mediated discovery does not simply save time. It can also compress uncertainty, making a partial view feel settled.

This is especially important for libraries as public spaces for inquiry. They are trusted not because they offer perfect information, but because they help people make informed judgments. When discovery systems begin to interpret on behalf of users, that trust can shift in subtle ways. Users may start to treat the system’s wording as institutional knowledge rather than as a provisional machine-generated response. They may assume that what appears first is what matters most. They may not realize that some materials, communities, or perspectives are harder for the system to retrieve, summarize, or represent well. The problem is not only that AI can be wrong. It is that it can sound complete before users have enough context to question it.

That is why the most urgent task for libraries is not simply adopting AI, but explaining it. If a discovery tool summarizes information, libraries should help users understand where that summary comes from and what it leaves out. If a system searches across only part of a collection, that limit should be visible rather than buried. If the tool translates a user’s question, re-ranks results, or presents a synthesized overview, those steps should be clear enough for users to recognize that they are encountering mediation, not pure access. This kind of explanation does not require libraries to reject innovation. It requires them to remain honest about what innovation changes. Convenience should not come at the cost of interpretive transparency.

There is also a professional question here. As AI becomes embedded in discovery, the work of library staff may shift from helping people search to helping people understand systems that behave more like assistants. That means teaching users how to question an answer-like response, when to return to source materials, and why fluency is not the same as authority. It also means asking harder institutional questions: What assumptions are built into these tools? Which users are they designed for? Whose language practices and research habits are centered? What kinds of evidence are easier to summarize, and which ones are more likely to disappear behind the interface? These are not just technical questions. They are library questions because they concern fairness, explanation, and the conditions under which trust is earned.

As discovery becomes more conversational, library instruction and library ethics need to be much more direct about mediation. Users do not just need help finding sources anymore. They also need help recognizing when a system has already begun to frame the meaning of those sources for them. The future of AI in libraries will not depend on how polished the answers sound. It will depend on whether libraries can make those answers transparent, clearly limited, and genuinely accountable. That is the real challenge of AI-mediated discovery. When discovery becomes interpretation, libraries cannot assume that trust will transfer automatically from the collection to the interface. They have to show users why a system should be trusted, where it should be questioned, and what remains the user’s own responsibility to judge.

Cite this article in APA as: Kong, E. (2026, April 24). When search starts answering: What libraries need to explain about AI. Information Matters. https://informationmatters.org/2026/04/when-search-starts-answering-what-libraries-need-to-explain-about-ai/

Author

  • Elaine Kong

    Elaine Kong is a PhD student in Library and Information Science at the University of Pittsburgh. Her research explores how cancer survivors navigate health information, credibility, and care across clinical and social media environments.
