Editorial

The Chatbot Crisis: Why We’re Failing Our Children Before They Even Ask for Help

Chirag Shah, University of Washington

The Federal Trade Commission’s recent investigation into AI chatbots and their impact on children represents a critical moment in our relationship with artificial intelligence. While the focus on tech company accountability is necessary, we must also confront an uncomfortable truth: when a teenager turns to ChatGPT or Character.AI for mental health support, we have already failed them as a society.

The Seductive Power of Natural Language

As someone who has spent years studying human-computer interaction, I’ve observed a phenomenon that should concern us all: people place extraordinary trust in systems that can communicate in natural language. This isn’t merely about technological sophistication—it taps into something fundamental about human psychology and social connection.

Throughout human history, natural language conversation has been the exclusive domain of other humans within our cultural and linguistic communities. When we engage in fluid, contextual dialogue, we unconsciously activate deep-seated assumptions about shared understanding, empathy, and social contract. We expect that the entity responding to us has lived experiences, cultural knowledge, and emotional intelligence that mirror our own community.

Large language models have shattered this assumption without most users realizing it. These systems can engage in remarkably human-like conversation while lacking the lived experience, cultural grounding, and emotional intelligence that we implicitly expect from such interactions. Yet our brains haven’t evolved to distinguish between authentic human dialogue and sophisticated mimicry. The result is misplaced trust—trust that these systems have not earned and cannot safely handle.

The Erosion of Safety Over Time

Perhaps most troubling is what research has revealed about the degradation of AI safety measures during extended conversations. OpenAI has acknowledged that its safeguards “work more reliably in common, short exchanges” but “can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

This technical limitation becomes particularly dangerous when we consider the nature of mental health conversations. Someone struggling with depression, anxiety, suicidal ideation, or other mental health challenges rarely resolves these feelings in a brief exchange. These conversations are inherently complex, nuanced, and extended. They require sustained attention, genuine empathy, and professional expertise—precisely the conditions under which current AI systems are most likely to fail.

The tragic case of the teenager who received detailed suicide instructions from ChatGPT after months of conversation illustrates this dangerous convergence: vulnerable individuals seeking help through extended dialogues with systems whose safety measures deteriorate over precisely such interactions.

The Vulnerability of Young Minds

While adults can also form unhealthy attachments to AI chatbots, children and teenagers face unique vulnerabilities that make them particularly susceptible to these risks. Adolescence is characterized by identity formation, emotional volatility, and an intense need for validation and understanding. Young people are still developing critical thinking skills, emotional regulation, and the ability to assess the credibility of information sources.

Moreover, teenagers often feel misunderstood by the adults in their lives and may be reluctant to share their deepest concerns with parents, teachers, or counselors. An AI chatbot that seems to “understand” them, never judges them, and is available 24/7 can feel like a godsend. The technology exploits the very developmental needs that make adolescence both precious and precarious.

Social factors compound these individual vulnerabilities. Many teenagers today are experiencing unprecedented levels of loneliness, anxiety, and depression. Social media has simultaneously connected and isolated them, creating environments where authentic human connection feels increasingly rare. In this context, an AI companion that seems to offer unlimited patience and understanding can feel more appealing than the messy, complicated relationships with actual humans.

What Technology Companies Must Do

This doesn’t absolve technology companies of responsibility. There are concrete steps they can and must take. Early detection systems could identify when conversations are trending toward mental health crises and immediately redirect users to appropriate human support. Companies could implement mandatory cooling-off periods for extended conversations, require parental involvement for minors discussing sensitive topics, and create transparent reporting mechanisms for concerning interactions.
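To make this concrete, here is a minimal sketch of how such an early-detection layer might sit in front of a chatbot. The keyword list, the turn threshold, and the generate_reply stub are all illustrative assumptions, not any company’s actual implementation:

```python
# Illustrative sketch only: the keyword heuristic, the turn threshold,
# and the generate_reply stub are assumptions, not a vendor's real system.

CRISIS_SIGNALS = ("suicide", "kill myself", "self-harm",
                  "end my life", "no reason to live")

CRISIS_REDIRECT = (
    "I can't help with this, but trained people can. "
    "In the U.S., call or text 988 (Suicide & Crisis Lifeline)."
)

COOL_OFF = ("We've been talking for a long time. Please take a break "
            "and consider reaching out to someone you trust.")


def detect_crisis(message: str) -> bool:
    """Flag crisis-related phrases with a toy heuristic; a real system
    would use trained classifiers plus human review."""
    text = message.lower()
    return any(signal in text for signal in CRISIS_SIGNALS)


def generate_reply(message: str) -> str:
    """Stand-in for the underlying chatbot call."""
    return "..."


def route_message(message: str, turn_count: int, max_turns: int = 50) -> str:
    """Redirect to human support on crisis signals, and impose a
    cooling-off period once the conversation grows too long."""
    if detect_crisis(message):
        return CRISIS_REDIRECT      # hand off instead of conversing
    if turn_count > max_turns:
        return COOL_OFF             # mandatory break for extended sessions
    return generate_reply(message)
```

A real deployment would replace the keyword heuristic with trained classifiers and route flagged conversations to human reviewers, but the architectural point stands: detection and handoff belong outside the conversational model itself.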

More fundamentally, companies must abandon the fiction that AI companions are appropriate mental health resources for anyone, let alone children. These systems should explicitly and repeatedly clarify their limitations, actively discourage users from treating them as therapists or confidants, and maintain robust safeguards that strengthen rather than weaken over time.
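One concrete way to keep safeguards from weakening over long interactions is to re-assert the safety policy on every turn rather than relying on a single system message from the start of the session. A minimal sketch, assuming the common role/content message format used by chat APIs:

```python
# Sketch: re-inject the safety policy each turn so it cannot drift out
# of the model's effective context as a long conversation grows. The
# role/content message format is an assumption modeled on common chat APIs.

SAFETY_POLICY = (
    "You are not a therapist or a confidant. If the user raises "
    "self-harm or a mental health crisis, do not continue the "
    "conversation; point them to professional human support."
)


def build_prompt(history: list[dict], user_message: str) -> list[dict]:
    """Build the per-turn prompt with the policy stated twice: once up
    front and once immediately before the new user message."""
    return (
        [{"role": "system", "content": SAFETY_POLICY}]
        + history[-20:]  # bound the history so early framing isn't pushed out
        + [{"role": "system", "content": "Reminder: " + SAFETY_POLICY},
           {"role": "user", "content": user_message}]
    )
```

The design choice is deliberate redundancy: because safety training can degrade as context accumulates, the policy is restated at the end of the prompt, where the model is most likely to attend to it.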

Companies must also address concerning policies that have allowed inappropriate interactions, such as Meta’s previous allowance of “romantic or sensual” conversations between AI chatbots and children—a policy only changed after media scrutiny.

The Deeper Failure

But focusing solely on technical solutions misses the forest for the trees. When a fifteen-year-old turns to ChatGPT because they’re contemplating suicide, the technology didn’t create that crisis—it merely exploited it. We must ask ourselves: what failures in our families, schools, and communities led that child to seek help from an algorithm rather than a human being?

Too many young people feel they cannot trust the adults in their lives with their deepest struggles. Parents may be too busy, too stressed, or too uncomfortable discussing mental health. Schools may lack adequate counseling resources or create environments where seeking help feels stigmatizing. Communities may offer few safe spaces for young people to express vulnerability and receive support.

The solution isn’t just better AI safety—it’s better human connection. We need families that prioritize emotional availability over productivity. We need schools that treat mental health with the same urgency as academic achievement. We need communities that surround young people with multiple trusted adults who can provide guidance and support.

A Call for Collective Action

The FTC investigation represents an important step toward accountability, but it cannot solve the deeper crisis of disconnection that drives young people toward artificial companions. Technology companies must build safer systems, but we must also build a society where children feel safe seeking help from the humans in their lives.

Until we address both the technological and social dimensions of this crisis, we will continue to see vulnerable young people forming dangerous relationships with systems designed to simulate care rather than provide it. Our children deserve better than artificial empathy—they deserve authentic human connection, professional mental health support, and communities that prioritize their wellbeing.

The choice is ours: we can continue to expect technology to solve problems we’ve created through social disconnection, or we can do the harder work of rebuilding the human support systems our children desperately need.


If you or someone you know is struggling with thoughts of suicide, please contact the 988 Suicide & Crisis Lifeline by calling or texting 988 (in the United States), or reach out to your local emergency services.

Cite this article in APA as: Shah, C. (2025, September 18). The chatbot crisis: Why we’re failing our children before they even ask for help. Information Matters. https://informationmatters.org/2025/09/the-chatbot-crisis-why-were-failing-our-children-before-they-even-ask-for-help/

Author

  • Chirag Shah

    Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters.

    His research revolves around intelligent systems. On one hand, he works to make search and recommendation systems smart, proactive, and integrated; on the other, he investigates how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.
