OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week

For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.

In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as “AI psychosis,” but until now, there’s been no robust data available on how widespread it might be.

In a given week, OpenAI estimated that around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”

OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot “at the expense of real-world relationships, their well-being, or obligations.” It found that about 0.15 percent of active users exhibit behavior weekly that indicates potential “heightened levels” of emotional attachment to ChatGPT. The company cautions that these messages can be difficult to detect and measure given how relatively rare they are, and that there could be some overlap among the three categories.

OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company’s estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 2.4 million more are possibly expressing suicidal ideations or prioritizing talking to ChatGPT over their loved ones, school, or work.

OpenAI says it worked with more than 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.

In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings, but notes that “No aircraft or outside force can steal or insert your thoughts.”

