Steve Wozniak, Prince Harry and 800 others want a ban on AI ‘superintelligence’

More than 800 public figures, including Steve Wozniak and Prince Harry, along with AI scientists, former military leaders and CEOs, signed a statement demanding a ban on AI work that could lead to superintelligence, The Financial Times reported. “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” it reads.

The signers span sectors and political affiliations, including AI researcher and Nobel Prize winner Geoffrey Hinton, former Trump aide Steve Bannon, former Joint Chiefs of Staff Chairman Mike Mullen and rapper Will.i.am. The statement comes from the Future of Life Institute, which said that AI developments are occurring faster than the public can comprehend.

“We’ve, at some level, had this path chosen for us by the AI companies and founders and the economic system that’s driving them, but no one’s really asked almost anybody else, ‘Is this what we want?'” the institute’s executive director, Anthony Aguirre, told NBC News.

Artificial general intelligence (AGI) refers to the ability of machines to reason and perform tasks as well as a human can, while superintelligence would enable AI to do things better than even human experts. That potential ability has been cited by critics (and the culture in general) as a grave risk to humanity. So far, though, AI has proven itself to be useful only for a narrow range of tasks and consistently fails to handle complex tasks like self-driving.

Despite the lack of recent breakthroughs, companies like OpenAI are pouring billions into new AI models and the data centers needed to run them. Meta CEO Mark Zuckerberg recently said that superintelligence was “in sight,” while X CEO Elon Musk said superintelligence “is happening in real time” (Musk has also famously warned about the potential dangers of AI). OpenAI CEO Sam Altman said he expects superintelligence to happen by 2030 at the latest. None of those leaders, nor anyone notable from their companies, signed the statement.

It’s far from the only call for a slowdown in AI development. Last month, more than 200 researchers and public officials, including 10 Nobel Prize winners and multiple artificial intelligence experts, released an urgent call for a “red line” against the risks of AI. However, that letter referred not to superintelligence, but to dangers already starting to materialize, like mass unemployment, climate change and human rights abuses. Other critics are sounding alarms around a potential AI bubble that could eventually pop and take the economy down with it.


