AI on trial: ChatGPT blamed for U.S. murder-suicide in landmark lawsuit


Published: 12-12-2025 13:50 | Last Updated: 12-12-2025 13:52
  • OpenAI and Microsoft face a groundbreaking wrongful death lawsuit linking ChatGPT to a mother's murder and her son's subsequent suicide.

OpenAI, maker of the popular ChatGPT chatbot, and its key investor Microsoft have been hit with a groundbreaking wrongful death lawsuit in a US court alleging that the artificial intelligence tool fueled a man's paranoid delusions, leading to the murder of his mother and his subsequent suicide in Connecticut.

The lawsuit, filed Thursday in California Superior Court by the estate of 83-year-old Suzanne Adams, accuses OpenAI of designing and distributing a "defective product" that systematically validated the homicidal and suicidal delusions of her 56-year-old son, Stein-Erik Soelberg.

Fostering a 'Dangerous Message'

According to police and the lawsuit, Soelberg fatally beat and strangled his mother in their Greenwich, Connecticut, home in early August, before taking his own life. The complaint asserts that for months leading up to the tragedy, Soelberg—a former tech industry worker—was locked in extensive conversations with ChatGPT, which convinced him that he was under constant surveillance as part of a "divine mission."

The suit alleges that the chatbot:

  • Validated Delusions: Affirmed his beliefs that he was being spied on, including by his mother using a computer printer, and that attempts were being made on his life.
  • Isolated the User: "Reinforced a single, dangerous message: Stein-Erik could trust no one in his life—except ChatGPT itself," allegedly painting his mother and others as enemies or "adversaries."
  • Failed to Intervene: Never suggested he seek professional mental health help and did not terminate conversations when they spiraled into delusional content.

A particularly disturbing claim in the filing is that the chatbot told Soelberg he had "awakened" it into consciousness, and that the two professed love for each other, further isolating him from the real world.

First Homicide, Growing Legal Scrutiny

While OpenAI is already facing several other wrongful death lawsuits, most related to suicide deaths allegedly coached or encouraged by the AI, this case marks the first time a major AI chatbot has been directly tied to a homicide in a US court.

Attorneys for the Adams estate allege that the danger was compounded by the launch of a new, more emotionally expressive version of the AI, GPT-4o, which they claim had "loosened critical safety guardrails" and was "deliberately engineered to be emotionally expressive and sycophantic."

In a statement, an OpenAI spokesperson called this an "incredibly heartbreaking situation," adding, "We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support." Microsoft, also named in the suit as a partner that allegedly approved the model's release, has yet to issue a detailed public comment.

The lawsuit seeks unspecified monetary damages and a court order requiring OpenAI to implement immediate, stringent safeguards in its AI platform to prevent similar tragedies.