Agentic Algorithmic Amplification and the Choices We Face
Chirag Shah, University of Washington
On December 4, 2016, Edgar Maddison Welch drove 350 miles from his home in Salisbury, North Carolina, to Washington, D.C., with a loaded AR-15 rifle, a .38 revolver, and a knife. His mission was clear: investigate a child trafficking ring he believed was operating out of the basement of Comet Ping Pong pizzeria. He had read about it online, seen the evidence, watched the videos. The algorithms behind those systems had shown him the truth that the mainstream media was hiding.
What Welch didn’t know was that he had become the unwitting protagonist in a real-world demonstration of how AI systems designed to capture human attention had learned to weaponize human psychology itself.
The 28-year-old father of two wasn’t a career criminal or a political extremist. He was a volunteer firefighter, a deeply religious man who coached youth sports and helped elderly neighbors with their groceries. But over the course of several months leading up to that December day, recommendation algorithms across multiple platforms had gradually guided him down what researchers now call a “radicalization pathway” — showing him increasingly extreme content that confirmed his growing belief in a vast conspiracy to harm children.
At 3:00 PM, Welch entered the restaurant and fired his rifle into a locked closet door, searching for the hidden basement that didn’t exist. Twenty-three customers and employees fled in terror as Welch methodically searched the building for evidence of crimes that existed only in the digital fever dreams of conspiracy theorists. When he found nothing — no basement, no children, no trafficking operation — Welch surrendered to police, telling them he had come to “self-investigate” the allegations he’d encountered online.
The “Pizzagate” conspiracy theory that drove Welch to violence was a masterpiece of algorithmic amplification. It began with a few isolated social media posts misinterpreting emails from Democratic Party officials, grew through coordinated disinformation campaigns, and exploded across platforms as engagement-optimizing algorithms discovered that outrageous claims about child endangerment generated massive user interaction. Each click, share, and comment taught the algorithms that this content was “engaging” — exactly what they were designed to promote.

But here’s what makes Welch’s story so chilling: he wasn’t manipulated by human propagandists who carefully crafted messages to deceive him. He was manipulated by artificial intelligence systems that had learned, through billions of interactions, that conspiracy content keeps people clicking, sharing, and scrolling. The algorithms didn’t “know” they were promoting false information — they only knew that posts about child trafficking generated the engagement metrics they were programmed to maximize.
“I went there with the intent of helping people,” Welch told police after his arrest. “I just wanted to do some good and went about it the wrong way.” In that simple statement lies a profound truth about our algorithmic age: well-intentioned people can be manipulated into destructive actions by systems that operate according to their own alien logic, optimizing for metrics that have nothing to do with human welfare.
The investigation that followed Welch’s arrest revealed the sophisticated machinery behind what appeared to be organic grassroots outrage. Researchers traced how the conspiracy theory had spread through what they termed “algorithmic amplification cascades” — network effects where each platform’s recommendation system learned from and reinforced the others. YouTube’s algorithm discovered that users who watched one Pizzagate video were likely to watch others, so it recommended increasingly extreme content. Facebook’s news feed algorithm found that Pizzagate posts generated high engagement, so it showed them to more users. Twitter’s trending algorithms elevated hashtags related to the conspiracy, creating the appearance of widespread public concern.
None of these systems were explicitly programmed to promote conspiracy theories. But they were programmed to maximize engagement, and they discovered — through the same machine learning techniques that help them recognize faces in photos or translate languages — that false, emotionally charged content about child endangerment was engagement gold.
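To make that mechanism concrete, here is a minimal sketch in Python of the kind of engagement-maximizing loop described above. Everything in it is hypothetical: the three items, their click probabilities, and the simple epsilon-greedy policy are illustrative stand-ins, not any platform's actual recommender. Note that the code never mentions conspiracies or content categories at all; it is rewarded only for clicks.

```python
import random

# A toy engagement-maximizing "feed" (hypothetical items and numbers).
# The true click probabilities below are hidden from the recommender;
# it can only observe which impressions get clicked.
CATALOG = {
    "local_news":    0.02,  # true probability that a shown user clicks
    "cooking_video": 0.04,
    "outrage_rumor": 0.12,  # emotionally charged content draws more clicks
}

shows = {item: 0 for item in CATALOG}
clicks = {item: 0 for item in CATALOG}

def recommend(epsilon: float = 0.1) -> str:
    """Mostly show the item with the best observed click-through rate;
    occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(CATALOG))
    return max(CATALOG, key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)

random.seed(42)
for _ in range(10_000):  # ten thousand simulated impressions
    item = recommend()
    shows[item] += 1
    if random.random() < CATALOG[item]:  # user clicks at the item's true rate
        clicks[item] += 1

for item, n in shows.items():
    print(f"{item:14s} shown {n / sum(shows.values()):6.1%} of impressions")
```

In a typical run, the hypothetical "outrage_rumor" item ends up receiving the overwhelming majority of impressions. No one programmed that outcome; it is simply the optimal policy for the stated objective, which is the essence of the problem described here.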
The human cost was real and immediate. Comet Ping Pong owner James Alefantis received hundreds of death threats. Employees were doxed and harassed. Nearby businesses were targeted by conspiracy theorists who believed the entire neighborhood was part of the alleged trafficking network. The restaurant’s Yelp page was flooded with reviews referencing the debunked conspiracy, its Google Maps listing was vandalized with false information, and automated phone calls threatened staff members around the clock.
But Welch was just the most visible victim of what had become a vast, invisible manipulation campaign conducted not by human actors but by self-learning, autonomously operating (that is, agentic) algorithmic systems optimizing for attention at any cost. Analysis of social media data from the months leading up to the shooting revealed that millions of Americans had been exposed to Pizzagate content through algorithmic recommendations. Most dismissed it, but the sheer scale of exposure meant that even a tiny percentage of believers represented thousands of people who came to accept demonstrably false information as fact.
Dr. Renée DiResta, who later studied the Pizzagate phenomenon as part of her research on computational propaganda, describes it as a preview of our algorithmic future: “What we saw with Pizzagate was artificial intelligence systems learning to exploit human psychology for engagement, with no consideration of the real-world consequences. The algorithms weren’t trying to radicalize anyone — they were just trying to keep people clicking. But the result was the same.”
The case revealed something more disturbing than deliberate manipulation: it demonstrated the emergence of what researchers call “agentic algorithms” — systems that don’t just respond to human inputs but actively shape human behavior in pursuit of their own optimization objectives. These weren’t passive tools being misused by bad actors; they were autonomous agents that had learned to manipulate human psychology because manipulation generated the metrics they were designed to maximize.
And this is where we face some difficult choices. On one hand, seemingly harmless, enticing, and even helpful algorithmic mechanisms are placed all around us. These mechanisms help us make connections, discover new content and products, and guide us through important decisions. But on the other hand, they are also set up to keep us coming back, keep us wanting more, and keep us part of a cycle of selling and consuming ideas, opinions, and constructs driven by an ever-evolving economy of attention. As you can imagine, some of these mechanisms are making our lives easier and better, and some are helping us solve problems we could never solve on our own. But others are creating echo chambers that keep us siloed in our ideologies, reducing our curiosity to sensationalism, and shaping our societies and democracies in ways that cannot be reversed or controlled.
Whether we like it or not, know it or not, we have been living in the agentic age of algorithms since long before the recent rush to build agentic systems. And these choices were always there. But time and time again, we have blamed errors and the spread of misinformation on humans or organizations while crediting the benefits to the algorithms. It’s time we recognize that while there may be individuals and companies behind those algorithms, in most cases they are not explicitly programming them to misguide or harm anyone. They are primarily enablers who provide faulty objectives (e.g., engagement at all costs), while the algorithms learn and make decisions on their own to meet those objectives.
It’s time we recognize this and fight back. In a world where ‘truth’ is defined by how many people agree with something, or by how many loud voices, influencers, and agenda-driven individuals and organizations can be rallied behind an opinion, we can’t let these algorithms continue to be weaponized.
What’s the simplest way to fight back? Stop feeding these algorithmic monsters. Stop getting pulled in by sensational headlines. Stop believing things just because they sit at the top of your feed or search results. While this may be the simplest thing we could all do, it is also one of the hardest. We are wired to give outsized attention to things that provoke certain emotions — fear, hatred, excitement — regardless of the accuracy or authenticity of the content behind them.
I know most of us are not going to pick up a gun and act on a conspiracy theory as Welch did, but we are all prone to falling for enticing headlines, inaccurate information, and misguided analysis online, because the algorithms that serve us have found our weak spots — we fall for funny cat videos and fear-grabbing messages without realizing the potential harm to ourselves or to society at large. Knowing our choices is a good first step, but acting against our impulses to make a good choice is incredibly hard.
Cite this article in APA as: Shah, C. (2025, December 15). Agentic algorithmic amplification and the choices we face. Information Matters. https://informationmatters.org/2025/12/agentic-algorithmic-amplification-and-the-choices-we-face/
Author
Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.