Is AI the New Plastic?
Chirag Shah, University of Washington
I was invited to be a debater at a debate organized by the New York Times at the VivaTech 2025 conference in Paris in June. The topic was “Is AI the New Plastic?” This was very intriguing for me since it wasn’t about AI being good or bad, but about thinking through its benefits, costs, and, most importantly, our responsibility for its safe deployment and use. After some consideration, I chose the “for” side, meaning my team argued that yes, AI is the new plastic. You can watch the debate here. Spoiler alert: my team won! Beyond what any of us personally believed or who won, it was an interesting and insightful event. As the opener for the “for” team, I led with the following remarks.
Yes, AI is the new plastic because our relationship to it is proving just as short-termist. There isn’t enough evidence to suggest that we are planning, as a planet, for AI’s long-term risks. We’re governed mostly, in fact, by what AI is doing now.
Let me give you three stark examples of this dangerous parallel.
First, the deployment rush mirrors plastic’s early adoption. Just as plastic was rapidly integrated into everything from food packaging to furniture in the 1950s without understanding microplastics or ocean pollution, we’re embedding AI into critical systems—healthcare diagnostics, criminal justice algorithms, financial lending—without fully grasping the long-term societal consequences. Companies are racing to deploy AI solutions because they work today, not because we understand what happens when these systems scale globally over decades.
Second, the waste problem is already emerging. Plastic gave us convenience but created islands of garbage in our oceans. AI is creating mountains of digital waste—energy consumption that rivals entire countries, discarded datasets filled with bias, and obsolete models that leave behind discriminatory patterns embedded in institutions. The environmental cost of training a single large language model equals the lifetime emissions of several cars, yet we’re churning out new models monthly.
Third, the regulatory lag is identical. It took 70 years to start seriously regulating plastic, and even now, microplastics contaminate our bloodstreams while governments debate solutions. With AI, we’re seeing the same pattern: algorithmic bias is already reshaping hiring and lending, deepfakes are destabilizing democracy, and automated systems are making life-altering decisions with little oversight. Yet our regulatory frameworks are years behind the technology.
The fundamental problem is that both plastic and AI offer immediate, tangible benefits that blind us to systemic risks. Plastic revolutionized food safety and manufacturing. AI is revolutionizing productivity and decision-making. But in both cases, we’re prioritizing short-term gains over long-term consequences.
Just as we now recognize that every piece of plastic ever made still exists somewhere on Earth, we must acknowledge that every AI system we deploy today will leave traces in our social, economic, and political systems for generations. The algorithms training on biased data today will influence hiring decisions in 2040. The surveillance systems we’re normalizing now will define privacy expectations for our children.
This isn’t an argument against AI any more than recognizing plastic’s dangers was an argument against innovation. The point is the pattern—the dangerous pattern of embracing transformative technologies while ignoring their long-term costs. We are sleepwalking into the same trap that gave us climate change, ocean pollution, and microplastics in our food chain.
AI is the new plastic because we’re making the same catastrophic mistake: prioritizing immediate utility over generational consequences. And just like with plastic, by the time we fully understand what we’ve unleashed, it may be too late to contain it. The alarm bells are ringing—the question is whether we’ll listen this time, or whether future generations will look back at 2025 the same way we now look back at 1950: as the moment we chose convenience over caution.
Cite this article in APA as: Shah, C. (2025, August 13). Is AI the new plastic? Information Matters. https://informationmatters.org/2025/08/is-ai-the-new-plastic/
Author
Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.