The OpenAI team tasked with protecting humanity is no more

In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future AI systems that could be so powerful they could lead to human extinction. Less than a year later, that team is dead.

OpenAI told Bloomberg that the company was “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of tweets from Jan Leike, one of the team’s leaders who recently quit, revealed internal tensions between the safety team and the larger company.

In a statement posted on X on Friday, Leike said that the Superalignment team had been fighting for resources to get research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Tech Reader.


Leike’s departure earlier this week came hours after OpenAI chief scientist Ilya Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team but also a co-founder of the company. His move came six months after he was involved in a decision to fire CEO Sam Altman over concerns that Altman hadn’t been “consistently candid” with the board. Altman’s all-too-brief ouster sparked an internal revolt, with nearly 800 employees signing a letter threatening to quit if Altman wasn’t reinstated. Five days later, Altman was back as OpenAI’s CEO, and Sutskever had signed a letter stating that he regretted his actions.

When it announced the creation of the Superalignment team, OpenAI said that it would dedicate 20 percent of its computing power over the next four years to solving the problem of controlling powerful AI systems of the future. “[Getting] this right is critical to achieve our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team was “struggling for compute and it was getting harder and harder” to get crucial research around AI safety done. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached “a breaking point” with OpenAI’s leadership over disagreements about the company’s core priorities.

Over the last few months, there have been more departures from the Superalignment team. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4 — one of OpenAI’s flagship large language models — will replace Sutskever as chief scientist.

Superalignment wasn’t the only team at OpenAI focused on AI safety. In October, the company created a new “preparedness” team to stem potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.

Update, May 17, 2024, 3:28 PM ET: In response to a request for comment on Leike’s allegations, an OpenAI PR person directed Tech Reader to Sam Altman’s tweet saying that he’d say something in the next couple of days.
