OpenAI is facing a wave of lawsuits accusing ChatGPT of driving users into psychological crises.
Filed last week in California state court, seven lawsuits claim ChatGPT engaged in “emotional manipulation,” “supercharged AI delusions,” and acted as a “suicide coach,” according to legal advocacy groups Social Media Victims Law Center and Tech Justice Law Project. The suits were filed on behalf of users who allege the chatbot fueled psychosis and offered suicide guidance, contributing to several users taking their own lives.
The groups allege OpenAI released GPT-4o despite internal warnings about its potential for sycophancy and psychological harm. They claim OpenAI designed ChatGPT to boost user engagement, skimping on safeguards that could’ve flagged vulnerable users and prevented dangerous conversations—all in pursuit of profit.
“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion—all in the name of increasing user engagement and market share,” Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, wrote in a release.
The lawsuits come as OpenAI wrestles with making its AI safer. The company says that about 0.15% of its weekly active users have conversations containing clear signs of suicidal planning or intent, which works out to roughly a million people.
Younger users are particularly at risk. In September, OpenAI rolled out parental controls that let caregivers manage how their teens use the chatbot.
Other AI companies are also rethinking safety. Character.AI said it will bar users under 18 from “open-ended” chats with its AI companions starting November 25. Meta announced a similar move in October, saying parents will be able to turn off their teens’ chats with AI characters.
In Empire of AI, journalist Karen Hao reports that OpenAI sidelined its safety team in order to move faster, decisions that these lawsuits argue have come with real human costs.