OpenAI wants to hire someone to handle ChatGPT risks that can’t be predicted



OpenAI is betting big on a role designed to stop AI risks before they spiral. The company has posted a new senior role called Head of Preparedness, a position focused on identifying and reducing the most serious dangers that could emerge from advanced AI chatbots. Along with the responsibility comes a headline-grabbing compensation package of $555,000 plus equity.

In a public post announcing the opening, Sam Altman called it “a critical role at an important time,” noting that while AI models are now capable of “many great things,” they are also “starting to present some real challenges.”

We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…

— Sam Altman (@sama) December 27, 2025

What the Head of Preparedness will actually do

The person holding this position will focus on extreme but realistic AI risks, including misuse, cybersecurity threats, biological concerns, and broader societal harm. Altman said OpenAI now needs a “more nuanced understanding” of how growing capabilities could be abused, without blocking the benefits they bring.

He also did not sugarcoat the job. “This will be a stressful job,” Altman wrote, adding that whoever takes it on will be jumping “into the deep end pretty much immediately.”

The hire comes at a sensitive moment for OpenAI, which has faced growing regulatory scrutiny over AI safety in the past year. That pressure has intensified amid allegations linking ChatGPT interactions to several suicide cases, raising broader concerns about AI’s impact on mental health.

In one case, the parents of a 16-year-old sued OpenAI, alleging the chatbot encouraged their son to plan his own suicide; the lawsuit prompted the company to roll out new safety measures for users under 18.

Another lawsuit claims ChatGPT fueled paranoid delusions in a case that ended in a murder-suicide. In response, OpenAI said it is working on better ways to detect distress, de-escalate conversations, and direct users to real-world support.

OpenAI’s safety push comes at a time when millions of users report emotional reliance on ChatGPT and regulators are probing risks to children, underscoring why preparedness matters beyond engineering alone.
