
Library Workers in Times of Hype: AI Edition

Matthew Noe, Ella Gibson, and Ruth Monnier

In the United States, libraries are consistently regarded as trusted places, and library workers are respected for their ability to locate trustworthy information. American society has placed its trust in public libraries and library workers based on lifetimes of being provided with credible, free-at-the-point-of-use sources, fact-checking assistance, and general goodwill. We cannot take this trust for granted or assume that the American public will continue to trust our institutions simply because it has historically. As library workers, our words, programs, and actions have weight, and not everything deserves equal promotion and support.

—As library workers, our words, programs, and actions have weight, and not everything deserves equal promotion and support—

If libraries and their workers are generally trusted by the American public, is the same true of generative AI and the companies behind it? According to the Pew Research Center, “52% of Americans are more concerned than excited about AI in daily life” and “25% of K-12 teachers say AI does more harm than it benefits.” Americans are concerned for a variety of reasons, ranging from increased AI surveillance, to real or imagined job market impacts, to the slop and hallucinations output by generative AI tools (such as ChatGPT, Gemini, Copilot, etc.). AI slop is low-quality, low-effort content generated with generative AI tools that intentionally draws attention and conversation away from relevant, high-quality content; consider, for example, the recent trend of individuals creating avatars in the style of Studio Ghibli animations. AI slop created by co-workers to complete their tasks can also damage library workers’ relationships with one another.

Beyond low-quality content, generative AI tools are designed to provide an answer even when the system cannot determine a true one, and so they continue to produce false information, or hallucinations. While the term has stuck, there are continuing calls to stop using it because it offers cover for the failures and potential harms of the technology. High-profile examples of AI hallucinations include the 2025 Make America Healthy Again report, multiple incidents of Deloitte delivering reports with nonexistent citations, and, of course, lawyers’ use of hallucinated citations in court documents. Most generative AI models hallucinate; some of the most common models do so in 30% or more of their outputs. The terms AI slop and AI hallucination are occasionally used interchangeably.

Why do AI slop and AI hallucinations matter to the library community? Because presenting incorrect facts, or skipping the step of verifying each output from a generative AI tool, can lead library workers to provide inaccurate information to patrons. Because each time a library worker gives a patron factually incorrect or misleading low-quality information, we fail in our mission to be a trusted public good. Each failed interaction risks eroding a patron’s perception of the library as a trustworthy place to find credible information. This goes beyond presenting a hallucinated resource: what happens when library workers recommend tools that give patrons false or incomplete information? Patrons often visit libraries looking for legal and medical advice, and library workers are often the point of contact directing them to resources for help.

What happens when people don’t have a place to go to be seen, especially without having to spend money? Libraries act as one of the last remaining third spaces open to all, where people can connect and socialize. Most libraries are open and welcoming to everyone in a community, unlike generative AI tools that charge monthly rates and/or profit off of their users’ interactions. There have been suggestions, though, that generative or other AI tools could replace library workers in service positions to various degrees, which would limit person-to-person interaction. What happens when libraries lose engagement with patrons because the workers have been replaced by machines? Generative AI companies have pushed for machines to be “friends” or “romantic partners.” Some Americans have pushed back, defacing these companies’ advertisements and filing lawsuits when the “friendship” goes wrong.

[Image: word cloud of the article’s key concepts. Human-created word cloud; all rights reserved to the authors.]

Just as we don’t place every book on display, we don’t need to promote every technology that comes along. Library workers can be AI literacy leaders without using, or encouraging the use of, generative AI tools. Being an AI literacy leader will look different for each library worker depending on their community, but could look like:

  • Educating patrons on how generative AI works compared to other AI tools;
  • Raising awareness that generative AI scams and deepfakes are infiltrating all aspects of life; everyone is now a potential target, not just celebrities and the wealthy;
  • Informing the public of ways to verify the information they are consuming;
  • Advising patrons how to protect their personal data (in general and within various accounts);
  • Recognizing the harms and liabilities that generative AI tools can cause when implemented in internal or external processes;
  • Disclosing when the library uses generative AI content (programming, marketing, etc.);
  • Creating policies that clearly state expectations about library workers’ and patrons’ use (or non-use) of generative AI tools.

Being a leader means standing up for the high-quality content and materials historically associated with libraries. Being an AI literacy leader in this moment means providing the education and skills training our communities need to identify flawed outputs and misinformation and to critically evaluate the technology they are being encouraged to use. While library workers typically do, and should continue to, keep abreast of new trends and technologies, that does not mean embracing these tools, nor does it require incorporating them into library offerings. Older library workers may remember, with some embarrassment, the Second Life trend, but the field has traditionally exercised more caution with hyped technologies: few libraries, for instance, ran workshops on NFTs. Library workers need to keep their communities’ trust by critically and thoroughly evaluating each technology and material that seeks a home in our institutions. Every kind of leader needs to be critical of their tools and the situation they find themselves in. As the saying goes, only fools rush in.

Cite this article in APA as: Noe, M., Gibson, E., & Monnier, R. (2025, December 12). Library workers in times of hype: AI edition. Information Matters. https://informationmatters.org/2025/12/library-workers-in-times-of-hype-ai-edition/
