AI—The Familiar Promise of the One Ring
Abhinav Choudhry
Generative AI (GenAI) has generated quite a stir in the past three years, pun intended. Trained, very arguably, on the ‘entire internet’, these models have not only breached the Turing Test barrier with apparent ease; laypersons now invoke them as appeals to authority (‘@Grok, is this true?’), and journalists quote AI output as ground truth for fact checks. The machine heuristic is still at play: AI is assumed to be unbiased, objective and rational. The hype around AI is palpable too: it is cast as the panacea to the world’s problems, sparking debates about a future need for universal basic income, even as hungry data centres gobble up gargantuan quantities of RAM and chips. However, before trillions of dollars are sunk into this vision, it is worth taking stock.
—GenAI inherently has trade-offs: truthfulness vs. task adherence, diversity vs. truthfulness, safety alignment vs. performance—
The GenAI Uncertainty Principle
Generative AI output is fundamentally stochastic. The way attention in large language models (LLMs) works also means that hallucinations are not a bug but an intrinsic feature: the model simply cannot verify the truth of its statements before making them. It is goal-seeking and aims to produce a solution quickly, which is also the commercially expedient choice. More accurate generative AI models can be built, but they need to be constrained, domain-specific expert systems; in that sense, they are closer to classical AI: machine learning, constrained expert systems and symbolic AI. Throwing more data at the problem, as the large technology firms building foundation models are doing, is going to increase the probability of hallucinations on specific problems.
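To make the stochasticity concrete, here is a minimal Python sketch using a toy four-word vocabulary and made-up logits rather than any real model’s values: decoding samples from a softmax distribution over next tokens, and nothing in that loop checks whether the sampled token is true.

```python
import numpy as np

# Toy illustration of stochastic decoding (not a real model):
# the model emits scores over candidate next tokens, and decoding
# *samples* from the resulting distribution. No step verifies truth.
rng = np.random.default_rng()

vocab = ["Paris", "Lyon", "Berlin", "Rome"]       # hypothetical answers
logits = np.array([3.2, 1.1, 0.4, 0.2])           # made-up model scores

def sample_next_token(logits, temperature=1.0):
    """Softmax-with-temperature sampling: higher temperature flattens
    the distribution, making unlikely (possibly wrong) tokens more
    probable. Truthfulness never enters the calculation."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

# The "same prompt" asked repeatedly yields varying answers, by design.
for t in (0.2, 1.0, 2.0):
    samples = [sample_next_token(logits, temperature=t) for _ in range(5)]
    print(f"temperature={t}: {samples}")
```

Lowering the temperature makes output more repeatable but does not make it more truthful; it only concentrates probability on whatever the training data made likely.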
Reinforcement learning from human feedback (RLHF) can theoretically improve this, but as it is practised it is likely only to polish style and surface-level behaviour without actually improving accuracy. A 700-word AI output with a couple of critical errors can be punished while another with no errors is rewarded, yet the former may be excellent in everything but those factual errors while the latter is merely generic. A quality output should strongly adhere to the criteria for task success while being stylistically strong and factually accurate. The sledgehammer of RLHF hopes that enough human interactions will eventually teach the AI to produce the perfect 700-word response; this is wishful thinking if you actually do the maths on error probabilities in imprecise natural language. Process-based supervision can improve matters, especially for more deterministic domains such as mathematics or smaller proprietary systems, but it does not practically scale to everyday tasks. My point is that GenAI inherently has trade-offs: truthfulness vs. task adherence, diversity vs. truthfulness, safety alignment vs. performance. Akin to the Heisenberg Uncertainty Principle, the more precisely we pin down one, the more we lose sight of the other.
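To illustrate the reward-granularity problem, and not any lab’s actual pipeline, here is a hedged sketch contrasting an outcome-level reward (one scalar for the whole response) with process-level scoring (per-claim credit). The Claim fields and numbers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    factually_correct: bool   # hypothetical annotator judgment
    style_score: float        # hypothetical 0-1 quality rating

def outcome_reward(claims):
    """One scalar for the whole output: a single critical error
    flips the preference, discarding everything else that was good."""
    return 0.0 if any(not c.factually_correct for c in claims) else 1.0

def process_reward(claims):
    """Per-claim credit: wrong steps are penalised individually,
    so strong material around an error still earns reward."""
    return sum(c.style_score * (1.0 if c.factually_correct else -1.0)
               for c in claims) / len(claims)

brilliant_but_flawed = [Claim("insightful point", True, 0.95),
                        Claim("critical factual error", False, 0.90),
                        Claim("strong synthesis", True, 0.95)]
generic_but_clean = [Claim("boilerplate claim", True, 0.40),
                     Claim("boilerplate claim", True, 0.40)]

for name, output in [("brilliant-but-flawed", brilliant_but_flawed),
                     ("generic-but-clean", generic_but_clean)]:
    print(name, "outcome:", outcome_reward(output),
          "process:", process_reward(output))
```

Under the outcome-level signal the generic answer dominates outright; the process-level signal at least registers what the flawed answer got right, which is why per-step supervision helps in deterministic domains, even though labelling every step is what prevents it from scaling to everyday tasks.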
Data Cannibalisation
Contrary to public perception, foundation LLMs were trained on only a fraction (~5%) of the data available on the world wide web, and their training corpora exclude both the deep web and the dark web. This reliance on the surface web automatically introduces selection bias in terms of what is public and what is private on the internet. Meanwhile, over half of internet traffic is made up of bots, 37% of them malicious, and over half of all articles are written by AI. The latter statistic does not even account for the fact that few writers today avoid using GenAI for at least parts of their work. So foundation models trained on current internet data are increasingly learning from synthetically generated data.
Secondly, much of the discourse has moved away from public forums such as Stack Overflow to private conversations between individual users and LLMs. The decline of forums means that AI has little fresh input from experts; it now relies increasingly on the knowledge it gains from private interactions. This suits AI companies, since they exclusively gain insight from expert interactions, especially as technology companies integrate AI into workflows. However, it presumes that the problems are solved and that the expert solutions make it into the training data. That presumption is unwarranted.
Changing Workflows
AI usage converts the user from a creator into a manager. This ostensibly quickens turnaround time, but given the error-prone nature of AI, more errors are likely to slip through. Editing is rarely as fun as writing, and the new workflows are far more stressful for workers given the meticulousness demanded in proofreading vast volumes of AI output; they may not even save workers time. AI will learn what is acceptable versus unacceptable from these new interactions, but not novel solutions. Experts are passionate as well as privacy-sensitive, and could well use open-source LLMs privately to expand their capabilities for personal use without revealing their secrets to third parties. More likely, then, AI is going to lose access to innovative problem solving: when an AI causes a problem and a human solves it but does not share the solution with the AI or with the internet, the AI does not learn. This is perhaps why Microsoft is so keen on integrating Copilot everywhere and screenshotting interactions.

AI is good at brute-force solutions and at pattern matching against its existing training data, but it is not innovative or creative; the age and bias of that training data will show with time. As the world changes and encounters new problems, humans will respond creatively, but will those responses make it into LLM training data? Even if they do, given the preponderance of regurgitated content, these human sparks of creativity and brilliance will be drowned in volume and will rarely make it into the final tokens output by an LLM. Moreover, humans raised on AI will fundamentally think within the constraints of the AI workflow.
AI is likely to accelerate the trend of humans automating away creative work and industriousness, something that has already occurred in the music industry and cinematography over the past two decades. It will not replace human ingenuity, but it risks creating a generation of humans too dependent on it to realise what they are capable of without it.
Cite this article in APA as: Choudhry, A. (2026, February 26). AI—The Familiar Promise of the One Ring. Information Matters. https://informationmatters.org/2026/02/aithe-familiar-promise-of-the-one-ring/
Author
Doctoral candidate in Information Sciences, University of Illinois Urbana-Champaign; Master of Public Administration (Environmental Policy), Cornell University; Master of Finance & Control, Delhi University; Bachelor of Engineering (Computer Science), Rajiv Gandhi Technological University; Certificate in Environmental Finance and Impact Investing, Cornell University. Previously Citibank sales manager, Mortgages; Tata-Cornell Institute Research Associate; Intern in Finance at EDF and Nestle.
I am a research professional aiming to improve the quality of human life through technological innovation, combining multidisciplinary experience in computer science, financial services, sustainable development, food, and public health. I have research publications on human-computer interaction, social media, Generative AI, and food policy. I am currently building technology for older adults: an AI-powered application for older adults' physical activity and a gamified simulation for imparting digital financial literacy (awarded the Purdue Institute for Information Literacy Award, 2024-26).