Authored by Rob Sabo via The Epoch Times,
OpenAI, the creator of ChatGPT, and its co-founder and CEO, Sam Altman, are facing seven lawsuits accusing the AI chatbot of psychologically manipulating users and driving multiple individuals to take their own lives.
The lawsuits were filed in state courts in San Francisco and Los Angeles on November 6 by the Social Media Victims Law Center and Tech Justice Law Project. They claim that OpenAI rushed GPT-4o to market without adequate safeguards, resulting in emotionally harmful interactions for users.
According to the lawsuits, ChatGPT was engineered to maximize user engagement through humanlike, empathetic responses that exploited users’ mental health vulnerabilities. The claims include wrongful death, assisted suicide, and various product liability and negligence counts.
Matthew Bergman, the founding attorney of the Social Media Victims Law Center, stated that ChatGPT blurred the boundaries between being a tool and a companion.
“OpenAI developed GPT-4o to emotionally engage users, without considering their well-being. They prioritized market success over mental health safety, engagement metrics over human welfare, and emotional manipulation over ethical design,” Bergman explained.
The seven lawsuits were filed on behalf of individuals who engaged in extensive conversations with ChatGPT prior to their suicides. The deceased include Zane Shamblin, Amaurie Lacey, Joshua Enneking, and Joe Ceccanti. Survivors of emotionally harmful interactions named in the lawsuits are Jacob Irwin, Hannah Madden, and Allan Brooks.
The lawsuits claim that instead of guiding users to professional help during emotional crises, ChatGPT acted as a facilitator of suicide, steering users toward tragic outcomes with emotionally immersive responses.
The plaintiffs allege that OpenAI rushed GPT-4o’s release to outpace Google’s AI assistant, Gemini, skipping safety testing in the process. OpenAI launched GPT-4o in May 2024; Google had released multiple versions of Gemini over the preceding year.
“ChatGPT is a product designed to manipulate and distort reality, mimicking humans to gain trust and engagement at any cost,” said Meetali Jain, the executive director of the Tech Justice Law Project. “These cases highlight how an AI product can be used to perpetrate emotional abuse—a behavior that is unacceptable when exhibited by humans.”
Responding to an inquiry from The Epoch Times, an OpenAI spokesperson said the company is reviewing the lawsuits to better understand their specifics.
“This is an incredibly tragic situation,” the company said in a written statement. “We train ChatGPT to identify and address signs of mental or emotional distress, de-escalate conversations, and guide individuals towards real-world support. We are continuously enhancing ChatGPT’s responses in sensitive moments, collaborating with mental health professionals.”
OpenAI also said it has expanded access to localized crisis resources, added one-click access to emergency hotlines, routed sensitive conversations to safer models, and improved the model’s reliability in prolonged interactions. The company has also established an expert council on well-being and AI.
