How Do You Like Them Agents?
Chirag Shah, University of Washington
I know this may be my personal bias as well as selection bias talking, but it seems everyone is interested in agents these days. At least everyone in the AI space. IBM and Morning Consult surveyed 1,000 developers building AI applications for enterprise, and 99% of them said they are exploring or developing AI agents. Gartner named agentic AI the top tech trend for 2025. Multiple analysts have called 2025 “the year of AI agents”. Again, it may be my own bias that makes the interest and investment in agents look so deep. It’s also likely that a lot of this is just hype, the kind we have seen around a number of AI-related things in recent years.
Still, when I see these reports and continue talking to people across the tech industry and other sectors, it’s clear that there is something there. Many people and organizations see agents as the next frontier, and understandably, they don’t want to be left behind. I even wrote about agents in an editorial post on Information Matters in February 2025. In it, I noted that “Unfortunately, most of the efforts that I see today lack that since the developers have been hyped up about what GenAI technologies could do and not thinking enough about what it should do.”
So how do we do that thinking? Well, there are many aspects to think through here, including why we want agents, how we build and deploy them responsibly, and the larger question of what they mean for us in the long run. For now, I will focus on the ‘how’ part – specifically, how we see agents being integrated into our lives.
As autonomous agent technologies rapidly permeate our digital landscape, a critical question emerges: what roles should computational agents fulfill to best augment human capabilities? The capabilities of today’s agents—from voice-activated personal assistants to code-generation systems—continue to expand dramatically, prompting urgent questions about their optimal design, function, and integration into human activities. Despite significant technical advances, we lack a coherent framework for conceptualizing the different relationships humans might have with agents, which hampers both the evaluation of existing technologies and the principled design of future systems.
I propose a relational framework that delineates three fundamental agent roles: (1) assistant, (2) collaborator, and (3) mentor. Each represents a distinct relationship paradigm characterized by different assumptions about agency, expertise, and initiative. This tripartite model provides a structured approach for analyzing agent technologies across domains and applications. By articulating these role distinctions, we can better understand the technical requirements, ethical implications, and design considerations for each category.
Role-1: Assistant
Assistant agents operate under a paradigm where the human user maintains primary agency and decision-making authority. These agents execute delegated tasks without questioning user intent, embodying a “do what I say” relationship model. The underlying assumption is that the human possesses sufficient knowledge to determine appropriate tasks and evaluate outcomes effectively.
Examples of assistant agents include systems for booking travel arrangements, shopping, scheduling meetings, or executing routine data processing tasks. The primary value proposition is efficiency—the agent saves time and effort by handling well-defined tasks that the human could perform but chooses to delegate.
Role-2: Collaborator
Collaborator agents operate under a paradigm of shared agency and complementary expertise. Unlike assistants, collaborators contribute domain knowledge and capabilities that the human may lack, creating a synergistic “let’s work together” relationship. The underlying assumption is that neither party possesses complete knowledge or capabilities, but together they can achieve outcomes superior to what either could accomplish independently.
Examples include co-creative systems in design, coding assistants that suggest implementations, diagnostic systems in healthcare, or scientific discovery tools that identify patterns in complex datasets. The primary value proposition is augmentation—enhancing human capabilities through complementary expertise and perspective.
Role-3: Mentor
Mentor agents operate under a paradigm where the agent assumes responsibility for user growth and development. These agents proactively identify opportunities for human learning and capability enhancement, embodying a “help me grow” relationship. The underlying assumption is that the agent possesses expertise the human wishes to acquire and can structure experiences to facilitate this development.
Examples include educational systems that adapt to learning patterns, fitness coaches that progressively challenge users, or professional development tools that identify skill gaps and suggest improvement strategies. The primary value proposition is transformation—changing human capabilities and knowledge over time rather than merely complementing or executing tasks.
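To make these distinctions a bit more concrete for developers, here is a minimal, hypothetical sketch of how the three role paradigms could be written down as explicit design assumptions when specifying an agent. It does not reflect any particular framework or implementation; the Python names (Role, RoleSpec, and the three instances) are purely illustrative.

from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    ASSISTANT = auto()     # "do what I say"
    COLLABORATOR = auto()  # "let's work together"
    MENTOR = auto()        # "help me grow"

@dataclass(frozen=True)
class RoleSpec:
    role: Role
    decision_authority: str   # who holds primary agency
    agent_initiative: str     # how proactive the agent is expected to be
    value_proposition: str    # what the human gets out of the relationship

# The three relationship paradigms, stated as design assumptions (illustrative only).
ASSISTANT = RoleSpec(Role.ASSISTANT, "human",
                     "executes delegated tasks without questioning intent", "efficiency")
COLLABORATOR = RoleSpec(Role.COLLABORATOR, "shared",
                        "contributes complementary expertise and suggestions", "augmentation")
MENTOR = RoleSpec(Role.MENTOR, "human, with the agent guiding growth",
                  "proactively structures learning experiences", "transformation")

Each field corresponds to one of the assumptions discussed above: agency, initiative, and the primary value proposition.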
I’m not suggesting that we have to pigeonhole any agent into just one of these three roles, but thinking through which role(s) an agent can play could help us answer not just that ‘how’ question but also the ‘why’ question about that agent. Or it may lead to the realization that an agent is not the right solution at all. I am an “AI pragmatist”, and so I would like to see AI, however you define it, help us solve our problems in a responsible manner. If agents are doing that, then sure, let’s build and deploy them. But I wouldn’t want to do that without thinking through what value they would really add, what role they could play in a given situation, and what the implications are for individuals and society at large with agents in the picture.
Cite this article in APA as: Shah, C. (2025, June 17). How do you like them agents? Information Matters. https://informationmatters.org/2025/06/how-do-you-like-them-agents/
Author
Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other hand, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.