GenAI Benefits and GenAI Burdens: Two Sides of the Same Coin?
Christopher Lueg
In the space of just three years since ChatGPT was released to the public, it and similar products have changed how millions of people engage with their desktop, laptop, and mobile computers, also known as smartphones. This includes many people who probably never imagined they would one day use Generative Artificial Intelligence (GenAI).
There are few digital domains left where people (sometimes referred to as end-users) aren’t using GenAI to explore new topics, to compose documents, to streamline workflows, or to visualize ideas. They engage with the actual GenAI algorithms only indirectly, via web interfaces provided by digital platforms that incorporate GenAI. Such platforms include, for example, the online marketplace eBay, the social media platform Facebook, and of course the public-facing ChatGPT system itself.
—Users of GenAI applications need to understand that using GenAI can lead to potential rewards but also introduces potential risks—
Information systems researchers recently developed a conceptual framework to describe how GenAI tools have transformed the way digital platforms operate. A defining characteristic of digital platforms is that they “facilitate interactions and value creation between multiple stakeholder groups through standardized interfaces and governance mechanisms.” The researchers identified four ways in which GenAI is reshaping how platforms operate: intelligent automation, hyper-personalization, democratization, and collaborative innovation. The last two focus specifically on how people engage with digital platforms. Democratization is used in a participatory sense to denote, for example, that GenAI can help people express themselves in ways they otherwise might not, such as by using terminology in a job application that they are not accustomed to. Some social media platforms now offer GenAI assistance to reword posts so that they express specific sentiments, such as being funny, being serious, or being business formal. Collaborative innovation denotes that people can use GenAI to interactively generate new ideas or compose documents, such as business reports or research papers.
Tangible outcomes of GenAI use like the aforementioned examples are considered positive outcomes, even if there are questions about the longer-term impact of GenAI (over)use. At the same time, there is mounting evidence of GenAI use leading to unexpected detrimental or even damaging outcomes. We focus on unexpected outcomes, which means we do not discuss instances where GenAI tools are used deliberately to create, say, fake footage pretending to depict events that did not actually take place or that did not include the people depicted (“deepfakes”).
Examples of detrimental or even damaging outcomes of GenAI use include Amazon-distributed eBooks on mushroom foraging that contain potentially deadly advice, and the beloved summer reading list in the Chicago Sun-Times that recommended books that do not exist.
This is just the tip of the iceberg, though. A recent study of fake citations found that GenAI tools “frequently generated fabricated or inaccurate bibliographic citations in research papers, with these errors becoming more common when the AI is prompted on less familiar or highly specialized topics.” The latter makes sense given that GenAI systems can be described as plausibility engines. Making up citations may seem like a minor issue, but only if one does not understand the purpose of citations. Citations are used to give credit, but more often they are used to “[refer] to a source of information that supports a factual statement, proposition, argument, or assertion.” The domino effect is that fake citations demand questioning the validity of the very statements they were meant to back.
The use of GenAI in legal proceedings poses similar challenges. High Court of Australia Chief Justice Stephen Gageler emphasized that the speed at which GenAI is being developed “could be outstripping people’s ability to ‘comprehend its potential risks and rewards.’” The context is Justice Gageler’s warning that it is unsustainable for judges to have become “human filters” due to the number of made-up precedents being cited in court cases.
Users of GenAI applications need to understand that using GenAI can lead to potential rewards but also introduces potential risks. These potentials are two sides of the same coin. GenAI how-to guides typically include reminders to verify GenAI outputs. The sheer number of cases where results were clearly not checked raises the question of why this is happening. One possible explanation is that people are accustomed to software that produces faulty output when given faulty input (e.g., faulty formulas in spreadsheets), but they are not accustomed to software that produces faulty outcomes (“hallucinations”) without showing any indication of faulty behavior. The infamous FDIV fault in the early Intel Pentium processor was arguably such a big deal because certain floating-point calculations were incorrect and yet there was no indication of faulty processor behavior. It is also worth noting that search engines that produce mostly irrelevant results still point to results that do exist; unlike ChatGPT, search engines do not make up results. This lack of experience might explain why a lawyer caught using fake GenAI-produced citations in a brief for his clients was caught again using fake citations and quotations when opposing a motion for sanctions. After all, there was likely no indication that the GenAI used was producing faulty outputs.
A different explanation builds on people’s limited understanding of the nature of risk. Risk perception is “the subjective judgment people make regarding the characteristics and severity of a risk.” Risk judgments are influenced by past experiences, cultural background, and cognitive biases. GenAI users who do not verify the outputs may simply assume that the risk of getting false results is low and/or that the risk of getting caught is low.
Perhaps we need to look beyond the realm of software to identify ways to help people internalize that there are always two sides of the shiny GenAI coin: potential benefits and potential burdens. The benefits are what people are looking for when using GenAI. The burden is the need to make output verification part of any GenAI routine.
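To make that burden concrete, here is a minimal sketch of what one such verification step could look like, assuming Python, the `requests` package, and the public Crossref REST API; the function name, similarity threshold, and example titles are illustrative assumptions, not part of any particular GenAI product or workflow.

```python
# Minimal sketch of a citation-verification step in a GenAI writing routine.
# Assumes Python 3, the `requests` package, and the public Crossref REST API
# (https://api.crossref.org); names and thresholds here are illustrative only.
import requests
from difflib import SequenceMatcher

CROSSREF_API = "https://api.crossref.org/works"

def looks_like_real_paper(title: str, min_similarity: float = 0.9) -> bool:
    """Return True if Crossref lists a work whose title closely matches `title`."""
    response = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    response.raise_for_status()
    for item in response.json()["message"]["items"]:
        for candidate in item.get("title", []):
            similarity = SequenceMatcher(None, title.lower(), candidate.lower()).ratio()
            if similarity >= min_similarity:
                return True
    return False

if __name__ == "__main__":
    # Titles suggested by a GenAI tool would be checked one by one.
    for generated_title in [
        "The Structure of Scientific Revolutions",
        "Embodied Plausibility Engines for Mushroom Foraging",  # likely fabricated
    ]:
        status = "found" if looks_like_real_paper(generated_title) else "NOT FOUND"
        print(f"{generated_title!r}: {status}")
```

A title that cannot be matched to a real record is not necessarily fabricated, but it is exactly the kind of output that needs a human look before it ends up in a reference list, a brief, or a reading list.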
Comparing GenAI use to the real-world behavior of speeding is a good start because it helps illustrate some of the key issues. Drivers tend to assume that their speeding won’t cause accidents and that they won’t get caught speeding either. A key difference is that at the end of a trip, any damage would be fairly obvious, unlike GenAI-introduced hazards. Speeding drivers tend to believe that they are in control even though their actual level of control is very limited. Driver education therefore tends to highlight unexpected events that would trip up even the best of drivers.
Creating a range of sandbox GenAI experiences would allow novice and experienced GenAI users alike to learn firsthand about a range of GenAI-associated hazards. It would certainly have helped the aspiring PhD student who emailed this author when looking for a PhD advisor. Unfortunately, the author’s research papers that the student highlighted as “inspiring” don’t exist; the titles were made up by the GenAI that the student likely used to compose their email. In this instance, using GenAI led to nothing but a great teaching example.
Cite this article in APA as: Lueg, C. (2026, January 12). GenAI benefits and GenAI burdens: Two sides of the same coin? Information Matters. https://informationmatters.org/2025/12/genai-benefits-and-genai-burdens-two-sides-of-the-same-coin/
Author
Christopher Lueg is a professor in the School of Information Sciences at the University of Illinois Urbana-Champaign. Internationally recognized for his research in human-computer interaction and information behavior, Lueg has a special interest in embodiment—the view that perception, action, and cognition are intrinsically linked—and what it means when designing for others. Prior to joining the faculty at Illinois, Lueg served as professor of medical informatics at the Bern University of Applied Sciences in Biel/Bienne, Switzerland. He spent almost twenty years as a professor in Australia teaching at the University of Technology, Sydney; Charles Darwin University; and the University of Tasmania, where he co-directed two of the university's research themes, Data, Knowledge and Decisions (DKD) and Creativity, Culture, Society (CCS).