Oil and Water: Why We Need to Stop Forcing Human-AI “Collaboration”
Chirag Shah, University of Washington
The technology industry loves a good metaphor. We’re told AI is our “copilot,” our “teammate,” our “collaborative partner.” From ChatGPT to GitHub Copilot, the narrative is clear: humans and AI working together in perfect harmony. But what if this entire framing is fundamentally misguided? What if, instead of partners in a dance, we’re trying to mix oil and water?
Recent debates have oscillated between two extremes. On one side, AI evangelists promise that agents will revolutionize every job, automating most knowledge work within years. On the other, skeptics like Emily Bender and Timnit Gebru rightly point out that AI cannot replace human expertise in fields like nursing, teaching, or social work. The compromise position—human-AI collaboration—has emerged as the seemingly sensible middle ground. But this compromise may be built on shaky foundations.
—Humans and AI may be fundamentally incompatible as information partners—
The problem isn’t that AI is bad at certain tasks or that collaboration tools need better interfaces. The problem runs deeper: humans and AI may be fundamentally incompatible as information partners. They operate on such different planes of existence that true collaboration, in any meaningful sense, becomes conceptually incoherent. Consider three dimensions where this incompatibility becomes starkly visible.
The Epistemic Chasm
Human information seeking is fundamentally about resolving genuine uncertainty. Research on information behavior demonstrates that when people seek information, they engage in complex sense-making processes that transform their understanding of the world. Brenda Dervin’s influential work characterizes information seeking as a way humans bridge gaps in their knowledge, fundamentally changing their epistemic states through active interpretation and meaning construction. Wilson describes it as a set of activities in which individuals identify needs, search for information, and integrate findings into their existing mental models, creating new understanding through an iterative process.
AI systems, conversely, perform pattern recognition without any epistemic experience whatsoever. The “stochastic parrots” critique articulated by Bender and colleagues remains salient: language models operate by assembling sequences of linguistic forms based on statistical patterns in training data, fundamentally lacking reference to meaning or understanding. These systems don’t resolve uncertainty because they don’t experience uncertainty. They execute computational operations that may superficially resemble answers, but no knowledge state changes occur within the machine. The appearance of understanding is precisely that—appearance.
This isn’t a technical limitation to be engineered away. It’s an ontological difference. Humans construct meaning; AI processes tokens. These are categorically distinct activities masquerading as similar ones.
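To make the token-processing side of that contrast concrete, here is a deliberately toy sketch of next-token generation. The “model” below is nothing but a bigram co-occurrence table built from a nine-word corpus; production language models use learned neural weights over vast corpora, but the basic loop is the same: score plausible continuations, sample one, append, repeat. At no point does anything in the loop need to understand, or even represent, what the words are about.

```python
import random

# A toy illustration of next-token generation. The "model" is just a table of
# which token follows which in a tiny corpus; real systems replace the table
# with learned neural weights, but the generation loop has the same shape.
corpus = "the patient asked the nurse about the treatment plan".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def continue_text(start: str, length: int = 5) -> list[str]:
    """Repeatedly sample a statistically plausible next token and append it."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:  # no observed continuation: stop generating
            break
        output.append(random.choice(candidates))
    return output

print(" ".join(continue_text("the")))
```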
The Temporal Mismatch
Information seeking for humans unfolds as an extended temporal process. Research consistently shows that people engage in iterative exploration, gradually refining their understanding through multiple encounters with information sources. Ellis’s behavioral framework identifies stages like chaining, browsing, and extracting that occur over time as understanding deepens. Kuhlthau’s model emphasizes how affective, cognitive, and physical dimensions evolve throughout the search process, with uncertainty gradually giving way to focus and then closure. This temporal unfolding is essential to human learning—we don’t simply receive information, we develop it through sustained engagement.
AI agents compress information provision into instantaneous outputs. The request arrives, computation occurs, and results appear—measured in milliseconds. There is no process, only product. No journey, only destination. While AI can be prompted multiple times in sequence, each interaction remains fundamentally discrete, lacking the coherent developmental trajectory that characterizes human information seeking.
This temporal incompatibility creates profound mismatches in expectations and interaction patterns. Humans anticipate evolving conversations where understanding accumulates; AI provides discrete transactions where context must be artificially maintained through engineering tricks rather than genuine continuity of experience.
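A minimal sketch makes the point about manufactured continuity. The `call_model` function below is a hypothetical stand-in for whatever completion endpoint an application uses, not any particular vendor’s API; the detail that matters is that the model retains nothing between calls, so the application must replay the entire transcript on every turn.

```python
from typing import Callable

# `Message` and `call_model` are illustrative assumptions, not a specific API.
Message = dict[str, str]  # e.g. {"role": "user", "content": "..."}

def chat_turn(history: list[Message],
              user_input: str,
              call_model: Callable[[list[Message]], str]) -> list[Message]:
    """Append the user's message, replay the full transcript, store the reply."""
    history = history + [{"role": "user", "content": user_input}]
    reply = call_model(history)  # the model sees only this payload, nothing else
    return history + [{"role": "assistant", "content": reply}]

# Each turn is a discrete transaction: the "memory" lives in the list the
# application keeps, not in the model. Drop the list and the continuity is gone.
```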
The Agentic Divide
Perhaps most fundamentally, humans possess genuine informational needs arising from their lived experience in the world. When someone seeks information about a medical condition, career options, or how to repair their car, they have authentic stakes in the outcome. Their uncertainty is real, their motivation intrinsic, their relationship to the information phenomenologically rich. As research on scholarly information seeking demonstrates, even professional information needs emerge from complex motivations including career advancement, disciplinary values, and desires to advance knowledge in ways that matter to the individual.
AI agents have no needs. They optimize objective functions defined externally. When an AI retrieves information or generates a response, nothing hangs in the balance for the machine. It experiences no anxiety about making wrong decisions, no satisfaction from solving problems, no curiosity driving exploration. The system is triggered by external prompts and executes according to its programming. The anthropomorphic language we use—the system “wants” to help, “tries” to understand, “learns” from feedback—obscures this fundamental reality.
Beyond the Collaboration Fantasy
Why does this matter? Because framing human-AI interaction as collaboration leads to misguided design choices, unrealistic expectations, and potentially harmful deployments. When we build systems assuming symmetric partnership between humans and AI, we ignore the profound asymmetry in how these entities process information, exist in time, and relate to knowledge.
The research agenda emerging from this incompatibility perspective isn’t about abandoning AI or retreating to pure human judgment. Instead, it calls for incompatibility-aware design—systems that explicitly acknowledge and work with these fundamental differences rather than papering over them with collaboration metaphors.
What might this look like practically? Design patterns that treat AI as sophisticated computational infrastructure rather than teammates. Interfaces that clearly distinguish between human sense-making and algorithmic pattern-matching. Evaluation frameworks that assess how well systems support human agency rather than how seamlessly they blend human and machine contributions into indistinguishable outputs.
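As one hypothetical illustration of what an incompatibility-aware interface layer might track, the sketch below tags every unit of content with explicit provenance rather than blending human and machine contributions into a single undifferentiated output. The names and structure here are illustrative assumptions, not a reference design.

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    HUMAN_INTERPRETATION = "human"      # produced through human sense-making
    MACHINE_PATTERN_MATCH = "machine"   # produced by statistical generation

@dataclass(frozen=True)
class ContentUnit:
    text: str
    provenance: Provenance

def render(units: list[ContentUnit]) -> str:
    """Render content without erasing the human/machine boundary."""
    label = {
        Provenance.HUMAN_INTERPRETATION: "[You wrote]",
        Provenance.MACHINE_PATTERN_MATCH: "[Generated]",
    }
    return "\n".join(f"{label[u.provenance]} {u.text}" for u in units)

print(render([
    ContentUnit("The pattern here suggests a staffing gap.", Provenance.HUMAN_INTERPRETATION),
    ContentUnit("Summary of 14 shift reports.", Provenance.MACHINE_PATTERN_MATCH),
]))
```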
The CSCW community has debated whether human-AI partnerships constitute genuine collaboration deserving study within Computer-Supported Cooperative Work frameworks. Some argue these partnerships lack the mutual intentionality, shared understanding, and symmetric agency that define true collaboration. Perhaps this debate itself reveals the problem: we’re trying to force AI into conceptual frameworks designed for human-human interaction, bending both the technology and our understanding of collaboration to maintain an increasingly strained metaphor.
Oil and water don’t mix, no matter how vigorously we shake the container. The solution isn’t better emulsifiers—it’s recognizing that some substances simply occupy different planes of existence. Humans and AI aren’t bad collaboration partners because we haven’t figured out the right interface yet. They’re not collaboration partners at all. And that’s okay. In fact, it might be essential to acknowledge this truth if we want to build information systems that genuinely serve human flourishing rather than simply perpetuating comforting fictions about our relationship with machines.
The path forward requires intellectual honesty about what AI is and isn’t, what it can and cannot be. Not every powerful tool needs to be a partner. Sometimes the most respectful relationship is one that acknowledges profound difference rather than insisting on impossible similarity.
Cite this article in APA as: Shah, C. (2025, October 14). Oil and water: Why we need to stop forcing human-AI “collaboration.” Information Matters. https://informationmatters.org/2025/10/oil-and-water-why-we-need-to-stop-forcing-human-ai-collaboration/
Author
Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On the one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.