
AI as Co-Author of Error? Disinformation, Verification, and the Fragility of Knowledge Work


Joseph Ryann Jalagat

The rise of artificial intelligence (AI) has revolutionized how we treat knowledge production. At its core, AI broadens access to the information shared in today's world and shapes how we examine every bit of information circulating online. For some, AI is no longer just a tool but an everyday companion: a partner for making decisions, writing and sharing posts online, and even reflecting on personal or professional concerns, a lifeline when time is short. Yet despite AI's leverage over knowledge production, management, and storage, this informational revolution introduces a paradox. The expansion of information availability also risks destabilizing trust in knowledge. At the root of this lies AI hallucination, in which a model produces output that is believable and sophisticated yet contrived and fictitious. In this sense, AI serves not only as an aid but as a co-author of error.


At large, AI hallucination is not an isolated digital glitch. Research suggests that hallucinations occur at a staggering rate of 20% to 60%. Beyond this, AI raises the issue of bias: the generative backend of large language models is rooted in historical legacies, societal imbalances, and algorithmic intricacies. These instances show how AI can manifest structural imbalance. And because generative AI can so easily mimic human-made content, misinformation may thrive under the guise of authenticity and reality. Falsity becomes a greater threat to us when fabricated content can slot seamlessly into the infrastructure of knowledge production.

Given AI's structural biases and tendency toward misinformation, the informational authority it has acquired is a reputation it must both maintain and justify. We tend to trust machines for their reputation of operational accuracy and reliability, and for reasons like this we may blindly accept AI as the core proprietor of information. Unfortunately, this dynamic creates a worrying informational environment. AI is not only deemed operationally efficient; it also reshapes how individuals and institutions manage information, risk, and accountability. From this vantage point, AI can effectively legitimize information as true and credible.

In the case of the Philippines, AI collides with the country's educational and political landscape, making it highly contested ground for information warfare. Notably, most of the Philippine population treats social media as a primary source of information, given its accessibility to the public. Within this informational environment, disinformation is rampant: not merely isolated erroneous Facebook posts, but organized networks of online trolls. AI-manufactured and AI-assisted disinformation thus increases the Philippine population's exposure to injustices of knowing. This injustice extends to how Philippine voters perceive and choose officials, which may undermine the democratic system. The intersection of AI with Philippine education and politics is worth emphasizing because low digital literacy remains a pressing issue, making disinformation harder to spot and leaving the population vulnerable to this structural and political problem.

Additionally, the prevalence of AI-generated falsehoods highlights a growing fragility in knowledge work. Processes once grounded in relatively stable markers of credibility, such as citation and attribution, now demand continual validation. We are repeatedly forced to weigh the validity of AI-produced content and decide how much of it to believe. AI hallucination therefore calls for information verification: adaptive approaches to checking and double-checking claims made with AI. By embedding Media and Information Literacy (MIL) in the educational system, we can equip people with the tools needed to navigate and engage with knowledge work online. Nevertheless, such strategies are far from universal or sufficient, especially when confronted with structural barriers to learning.

On the other hand, it would be simplistic to treat AI only as a vehicle for disseminating misinformation. Generative AI plays contradictory roles in the information environment: it can fight misinformation on one front while worsening it on another. Rather than a tool with a single purpose, AI should be viewed as a multi-functional system that informs, persuades, regulates, and educates us. In this light, we must see AI as an intermediary among information, misinformation, and disinformation.

In conclusion, this AI phenomenon does not only revolutionize how knowledge is produced and distributed; it also reshapes how we view information in general. AI has fundamentally altered our relationship with information and the trust and authority that attend it. Knowledge, at this crucial moment, rests on co-production between us and these large language models. Thus, within knowledge production, AI operates as a co-author, not only of knowledge but of error.

Cite this article in APA as: Jalagat, J. R. (2026, April 28). AI as co-author of error? Disinformation, verification, and the fragility of knowledge work. Information Matters. https://informationmatters.org/2026/04/ai-as-co-author-of-error-disinformation-verification-and-the-fragility-of-knowledge-work/

Author

  • Joseph Ryann J. Jalagat completed his MA in Communication Arts at the University of the Philippines Los Baños and is currently pursuing his PhD at UP Diliman. He has been a fellow of several national creative writing workshops, and his creative and academic works can be read online. His research interests are gender and cultural discourse, media, popular culture, political communication, sex communication, literature, and education. He currently teaches at Far Eastern University–Manila.

