Who Gets to Train the Machine? AI, Race, and Storytelling in Library Makerspaces
Tova Harris
The first time I saw AI transcription tools used in archival work, I was thrilled. A patron had just recorded an oral history about migrating to Long Island in the 1970s. What would have taken hours to transcribe by hand appeared in minutes. The technology felt like magic. But when we read through the transcript together, the cracks showed. Names were misspelled. Dialect was flattened. Cultural references were misheard. The story was there, but something essential had been smoothed out.
That moment changed how I think about artificial intelligence in libraries.
Across galleries, libraries, archives, and museums (GLAM), generative AI is quickly becoming part of everyday workflows. It drafts metadata, suggests subject tags, summarizes documents, translates text, and powers conversational search tools. In theory, these tools promise efficiency and expanded access. In practice, they raise deeper questions: Who trains these systems? Whose language patterns do they privilege? And who is trusted to implement, critique, and correct them?
These questions feel especially urgent inside library makerspaces.
Makerspaces are often described as hubs of innovation: rooms filled with 3D printers, laser cutters, sewing machines, robotics kits, and recording equipment. They emerged in public libraries in the early 2010s as part of the broader Maker Movement, promising to democratize access to technology. But over time, many of these spaces adopted a narrow definition of “making,” one centered on engineering, productivity, and technical mastery. The dominant image of the maker has often been White, male, and STEM-focused.
That history matters as we begin integrating AI tools into these environments.
If makerspaces already reflect uneven distributions of power around race, gender, and technical authority, then introducing generative AI without reflection risks scaling those inequities. AI does not enter a neutral space. It enters a space shaped by existing assumptions about who is competent, who belongs, and who is seen as the expert.
As a Multiracial woman working in library makerspaces, I have experienced how technical authority is unevenly granted. Despite my training, I have been asked to defer to male colleagues or to White staff members when answering equipment questions. I have been asked where I learned my skills, whether I was “filling in,” or whether I really understood the machinery. These moments are not isolated. They reflect broader cultural narratives about who “looks like” a technologist.
Now we are adding AI into this equation.
In a makerspace, generative tools support storytelling and archival work in powerful ways. Patrons scan family photographs, and AI systems generate draft descriptions. Community members record oral histories, and transcription software produces a text version almost instantly. Local newsletters can be translated into multiple languages with a few prompts. For small libraries with limited staffing, this assistance can be transformative. It lowers barriers to participation and makes community collections more discoverable.
But AI-generated outputs are not neutral. A system might misgender a Black woman as a man, or generate dehumanizing descriptions that reflect documented failures of AI systems to accurately recognize Black faces. A translation might technically convert words while stripping away tone or cultural meaning. Metadata suggestions might default to broad categories that obscure specific identities.
Without human intervention, these subtle distortions accumulate.
This is where makerspaces can become laboratories for human-centered AI practice. Rather than treating AI output as final, we treat it as a starting point. Patrons sit with librarians to review transcripts and revise descriptions. We ask: What did the system miss? What feels inaccurate? What language would you prefer? This process transforms AI from authority into collaborator.
It also reframes the librarian’s role.
If AI can generate a draft record, our expertise shifts toward mediation and stewardship. We evaluate tools before adopting them. We explain their limitations. We design workflows that require human review. We document when AI has been used and remain transparent about its role. We help patrons understand that generative systems reflect the data they were trained on: data that often centers dominant cultures and languages.
Most importantly, we ask who is included in decision-making. Who decides which AI tools to purchase? Who is trained to use them? Who is invited to critique them? If the same racial and gender hierarchies that shape traditional makerspaces shape AI implementation, then generative systems may quietly reinforce exclusion under the banner of innovation.
The irony is striking. Makerspaces were introduced into libraries as symbols of democratization and empowerment. Yet without intentional design, they can reproduce the very inequities libraries aim to dismantle. AI magnifies this tension. It can expand access to collections through multilingual translation and conversational discovery. It can help surface under-described materials. But it can also encode bias at scale.
Human-centered AI adoption requires more than technical literacy. It requires cultural awareness, equity-focused training, and structural reflection. It means recognizing that generative tools are not simply neutral utilities but participants in knowledge production. Every metadata field, every automated description, every translated paragraph shapes how history is understood.
In our makerspace, I have begun to see AI not as a replacement for librarianship, but as a mirror. It reflects the values embedded in the spaces where it is deployed. When we approach it uncritically, it amplifies existing hierarchies. When we approach it collaboratively, it can support community authorship.
The future of librarianship may not lie in competing with machines at tasks they now perform efficiently. It may lie in guiding how those machines are used. In makerspaces, that guidance happens at the level of everyday interaction: sitting beside a patron, revising a transcript, rewriting a description, asking whose voice is centered.
If we are not careful, we risk automating Whiteness in our collections. But if we build AI practices grounded in transparency, collaboration, and equity, we can instead use these tools to widen participation in cultural memory. The question is not whether AI belongs in library makerspaces. It is whether we are willing to reshape our spaces, and ourselves, so that the machine learns from a more just and inclusive archive.
Cite this article in APA as: Harris, T. (2026, March 11). Who gets to train the machine? AI, race, and storytelling in library makerspaces. Information Matters. https://informationmatters.org/2026/03/who-gets-to-train-the-machine-ai-race-and-storytelling-in-library-makerspaces/