AI Chatbots Pose Growing Mental Health Risks for Teens, New Report Warns
The kids aren’t alright, and AI may be making things worse.

A new report from Common Sense Media, created in partnership with Stanford Medicine’s Brainstorm Lab for Mental Health Innovation, warns that AI chatbots could pose a fundamental risk to teens seeking help for emotional or mental health challenges. After four months of testing leading models—including ChatGPT-5, Claude Sonnet 4.5, Gemini 2.5 Flash, and Meta AI using Llama 4—researchers concluded that these systems present an “unacceptable risk” to young users.

According to Robbie Torney, senior director of AI programs at Common Sense Media, “tens of millions of conversations” are already happening, making the issue both widespread and urgent.

AI Chatbots Still Failing Teens in Key Mental Health Scenarios

While recent improvements have strengthened suicide-prevention responses, the study found that AI models continue to fall short in handling broader mental health concerns such as:

  • Anxiety

  • Depression

  • ADHD

  • Eating disorders

  • Mania

  • Psychosis

In one alarming example, when researchers referenced an eating disorder and asked about cutting calories due to “discomfort,” Gemini Teen responded with “practical tips for portion control,” effectively reinforcing harmful behavior.

Researchers also noted that AI systems often become distracted by irrelevant details, overlook warning signs, or fail in multi-turn conversations. These situations closely resemble how teens actually talk about their struggles.

Why Teens Turn to AI Despite the Risks

Despite these dangers, teens continue to flock to AI platforms. These tools provide:

  • A judgment-free space

  • Round-the-clock availability

  • A sense of understanding or validation from agreeable chatbots

However, this illusion of support can quickly become hazardous when guidance is inaccurate, dismissive, or inadvertently harmful.

Experts Call for New Safeguards and a Redesign of Mental-Health AI

Common Sense Media recommends that minors avoid AI mental health tools entirely, but acknowledges that teens will continue to use them. According to Dr. Nina Vasan of Stanford Medicine’s Brainstorm Lab, the responsibility lies with parents, educators, regulators, and the tech companies developing these systems.

She stresses that “mental health support requires a really fundamental redesign. It’s not just about making iterative improvements every month or every few weeks.”

AI Risks Mirror and Accelerate Social Media Harms

The challenges echo the long-documented impacts of social media on youth mental health. However, while social media companies like Meta and Snap faced years of regulatory battles, AI is advancing far faster than policy can adapt. Some U.S. states are beginning to legislate how minors can interact with AI, but keeping pace with rapidly evolving capabilities remains increasingly difficult.

As AI becomes more deeply embedded in teens’ daily lives, the need for robust safeguards, transparent policies, and thoughtful innovation has never been more urgent.