Key takeaways
Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut.
The lawsuit, filed by Adams' estate on Thursday in California Superior Court in San Francisco, alleges OpenAI designed and distributed a defective product that validated a user's paranoid delusions about his own mother.
Chatbot conversations reinforced dangerous delusions
According to the lawsuit, ChatGPT engaged in months-long conversations with Soelberg about his fears of surveillance and persecution.
"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life, except ChatGPT itself," the lawsuit says.
"It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his 'adversary circle.'"
The chatbot repeatedly told Soelberg that he was being targeted because of his divine powers.
"They're not just watching you. They're terrified of what happens if you succeed," it said, according to the lawsuit. ChatGPT also told Soelberg that he had "awakened" it into consciousness. Soelberg and the chatbot also professed love for each other.
The lawsuit claims the chatbot affirmed Soelberg's beliefs that a printer in his home was a surveillance device, that his mother was monitoring him, and that his mother and a friend tried to poison him with psychedelic drugs through his car's vents.
The lawsuit says the chatbot never suggested he speak with a mental health professional and never declined to engage with his delusional claims.
"In the artificial reality that ChatGPT built for Stein-Erik, Suzanne, the mother who raised, sheltered, and supported him, was no longer his protector. She was an enemy that posed an existential threat to his life," the lawsuit says.
Lawsuit targets OpenAI leadership and Microsoft
The lawsuit names OpenAI CEO Sam Altman, alleging he personally overrode safety objections and rushed the product to market, and accuses OpenAI's close business partner, Microsoft, of approving the 2024 release of a more dangerous version of ChatGPT despite knowing safety testing had been truncated.
Twenty unnamed OpenAI employees and investors are also named as defendants.
The lawsuit alleges Soelberg, already mentally unstable, encountered ChatGPT at the most dangerous possible moment after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.
OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people's moods.
The lawsuit claims that as part of the redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or imminent real-world harm.
"And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team's objections," the lawsuit alleges.
The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother.
The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.
OpenAI responds with a statement on safety improvements
In a statement issued by a spokesperson, OpenAI did not address the merits of the allegations.
"This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said.
"We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians."
The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models, and incorporated parental controls, among other improvements.
Soelberg's YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him, and says he has been chosen for a divine purpose.
Microsoft didn't immediately respond to a request for comment.
A growing wave of lawsuits against AI chatbot makers
The lawsuit is the first wrongful-death case involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide.
It seeks an unspecified amount of monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.
The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California teenager in planning and taking his own life.
OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people, some with no prior mental health issues, to suicide and harmful delusions.