How Will You Respond to the Unacceptable Costs of GenAI?
Elisa Tattersall Wallin
Few technological advancements have been so hyped, discussed and quickly adopted as generative artificial intelligence (GenAI), with tools such as ChatGPT, Copilot, Gemini and Grok. With this swift emergence come fears of being left behind and of what this altered future will hold, which can lead people to jump on the bandwagon without much thought. As an academic, one of the most common sentiments I encounter these days is that we must let university students use GenAI or they will fall behind before they even enter the workforce, and that as researchers we should embrace GenAI to make our work more efficient. While I recognise that machine learning can indeed be useful, I argue that GenAI comes at an unacceptable cost.
In an ongoing crisis of information, the last thing we need is more unreliable information. GenAI tools enable the creation of disinformation: purposefully misleading people with false information. It should also be well known by now that GenAI repeatedly provides incorrect information, so-called hallucinations, and even becomes more prone to do so as the models improve. The way large language models (LLMs) learn to imitate human language is built on probability: which word is most likely to come next in a sentence? Not what is most factually correct, just what sounds right. As people use GenAI like search engines or encyclopaedias, they are unsuspectingly provided with false information. For information specialists, the spread of mis- and disinformation should be unacceptable. Furthermore, continuous use of GenAI negatively impacts people's cognitive abilities, eroding their critical thinking. This combination creates a perilous loop.
Moreover, the negative impact GenAI has on the environment and climate is multilayered. There is, for example, the harmful extraction of rare earth elements for the creation of hardware such as chips, followed by the transportation of that hardware. Then there are the AI data centres, whose extensive impact occurs both during the training of the LLMs and throughout their use. Most AI data centres run on fossil fuels, leading to increased emissions of greenhouse gases, which furthers climate change. As AI chatbots are designed to keep users continuously posing queries and refining the generated content, and AI summaries are becoming the norm for search engines, emissions rise even higher. The data centres also contribute to air pollution in the local communities where they are situated, such as xAI's, which is run on unlicensed methane gas. Having a data centre nearby can also lead to increased electricity costs for residential customers, offsetting the discounts that utility companies offer to the tech companies.
Furthermore, large quantities of water are used to train these LLMs and to cool down the data centres. This has a detrimental impact on local water supplies, ecosystems and public health. For example, inhabitants have had pre-existing health issues exacerbated, or have even suffered miscarriages, due to water deficiencies caused by nearby AI data centres. Most data centres developed for AI today are established in communities that are already water scarce. As if this were not enough, a new British report shows that biodiversity loss and the collapse of local ecosystems are a serious security issue. Simultaneously, the acceptable use policies of the major AI models do not take issues related to the climate, ecosystems, biodiversity or animals into consideration. In fact, the only issue up for consideration seems to be that people should be protected from encountering AI-generated animal abuse. Meanwhile, the very real consequences GenAI infrastructures have on the biosphere do not warrant deliberation by the Big Tech companies.
There is also a myriad of ethical issues with GenAI and Big Tech. Recently, a vast number of sexual deepfakes were created on X, where users of the platform employed the AI tool Grok to undress and sexualise images of real women and children. Additionally, bias within the LLMs causes further harm to women, as in AI-assisted summaries of social care cases, where women's health issues and needs were downplayed by the AI tool while men's similar needs were summarised as significant. This could lead to fewer women receiving the care they need. Moreover, the content moderators working for Big Tech to remove harmful content from the models often do so under poor working conditions, with some diagnosed with PTSD as a result of their jobs.
Another well-known ethical issue is that these AI models are trained on copyrighted data, including works by authors, artists and researchers. The Big Tech companies have not asked the original creators for consent or given them compensation, instead claiming that this constitutes fair use. Intriguingly, the European Parliament has just pronounced that remuneration for copyrighted works used as AI training data is a must. Big Tech also continuously collects users' interactions with the AI chatbots. Considering how AI is used as a therapist, friend or even romantic partner, this illustrates the kind of sensitive personal information being shared with ChatGPT and the like. As the Big Tech companies run out of data, they are also turning to shadier methods in order to continue training their models. In 2025, Meta started mining its users' phone camera rolls and uploading the images to train its AI model, without transparency or consent. From a democratic standpoint, we must also consider that some of the largest donors to MAGA Inc. are tech leaders, like the CEO of OpenAI. This influence shows as the Trump administration's AI policy aims for global technological dominance, and as the administration uses GenAI for writing governmental regulations, creating AI-generated images and videos of Trump, and beyond.
We should remember that those who profit the most from our growing reliance on GenAI are the tech companies themselves. Meanwhile, the people who are most excited about AI are the ones who understand it the least. Taking into consideration GenAI's role in the spread of disinformation, the complex damage done to people and the planet, and the proven negative effects on users' cognitive skills, I argue for critical perspectives on, and ideally critical refusal of, GenAI.
Cite this article in APA as: Wallin, E. T. (2026, February 10). How will you respond to the unacceptable costs of GenAI? Information Matters. https://informationmatters.org/2026/02/how-will-you-respond-to-the-unacceptable-costs-of-genai/
Author
Elisa Tattersall Wallin, PhD, is a senior lecturer at the Swedish School of Library and Information Science (SSLIS), University of Borås, Sweden. Her research relates to information practices and reading practices in digital environments, critical platform studies, and sustainability.
She has studied environmental information on podcasts and social media as part of the Mistra Environmental Communication programme. In 2023, her PhD thesis on audiobooks was named runner-up for the iSchools doctoral dissertation award, recognising outstanding work in the information science field.