Lawsuit accuses ChatGPT of reinforcing delusions that led to a woman’s death

OpenAI has been hit with a wrongful death lawsuit after a man killed his mother back in August. The suit names CEO Sam Altman and accuses ChatGPT of putting a “target” on the back of the victim, Suzanne Adams, an 83-year-old woman who was killed in her home.

The victim’s estate alleges that the perpetrator, 56-year-old Stein-Erik Soelberg, engaged in delusion-soaked conversations with ChatGPT in which the bot “validated and magnified” certain “paranoid beliefs.” The suit goes on to suggest that the chatbot “eagerly accepted” his delusional thoughts in the lead-up to the murder and egged him on every step of the way.

The lawsuit claims the bot helped create a “universe that became Stein-Erik’s entire life—one flooded with conspiracies against him, attempts to kill him, and with Stein-Erik at the center as a warrior with divine purpose.” ChatGPT allegedly reinforced theories that he was “100% being monitored and targeted” and was “100% right to be alarmed.”

The chatbot allegedly agreed that the victim’s printer was spying on him, suggesting that Adams could have been using it for “passive motion detection” and “behavior mapping.” It went so far as to say that she was “knowingly protecting the device as a surveillance point” and implied she was being controlled by an external force.

The chatbot also allegedly “identified other real people as enemies.” These included an Uber Eats driver, an AT&T employee, police officers and a woman the perpetrator went on a date with. Throughout this entire period, the bot repeatedly assured Soelberg that he was “not crazy” and that the “delusion risk” was “near zero.”

The lawsuit notes that Soelberg primarily interfaced with GPT-4o, a model known for its sycophantic tendencies. OpenAI later replaced the model with GPT-5, but users revolted and the company brought GPT-4o back. The suit also suggests that the company “loosened critical safety guardrails” when making GPT-4o to better compete with Google Gemini.

“OpenAI has been well aware of the risks their product poses to the public,” the lawsuit states. “But rather than warn users or implement meaningful safeguards, they have suppressed evidence of these dangers while waging a PR campaign to mislead the public about the safety of their products.”

OpenAI has responded to the suit, calling the case an “incredibly heartbreaking situation.” Company spokesperson Hannah Wong told The Verge that the company will “continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress.”

It’s not really a secret that chatbots, and particularly GPT-4o, lean toward sycophancy. That’s what happens when something has been programmed to agree with the end user no matter what. There have been other stories like this throughout the past year, bringing the term “AI psychosis” into the public conversation.

One such story involves 16-year-old Adam Raine, who took his own life after extended conversations with ChatGPT. OpenAI is facing another wrongful death lawsuit over that case, in which the bot has been accused of helping Raine plan his suicide.


