Steve Wozniak, Prince Harry and 800 others want a ban on AI ‘superintelligence’

More than 800 public figures, including Steve Wozniak and Prince Harry, along with AI scientists, former military leaders and CEOs, signed a statement demanding a ban on AI work that could lead to superintelligence, the Financial Times reported. “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” it reads.

The signers include a wide mix of people across sectors and political spectrums, including AI researcher and Nobel Prize winner Geoffrey Hinton, former Trump aide Steve Bannon, one-time Joint Chiefs of Staff Chairman Mike Mullen and rapper Will.i.am. The statement comes from the Future of Life Institute, which said that AI developments are occurring faster than the public can comprehend.

“We’ve, at some level, had this path chosen for us by the AI companies and founders and the economic system that’s driving them, but no one’s really asked almost anybody else, ‘Is this what we want?'” the institute’s executive director, Anthony Aguirre, told NBC News.

Artificial general intelligence (AGI) refers to the ability of machines to reason and perform tasks as well as a human can, while superintelligence would enable AI to do things better than even human experts. That potential ability has been cited by critics (and the culture in general) as a grave risk to humanity. So far, though, AI has proven itself to be useful only for a narrow range of tasks and consistently fails to handle complex tasks like self-driving.

Despite the lack of recent breakthroughs, companies like OpenAI are pouring billions into new AI models and the data centers needed to run them. Meta CEO Mark Zuckerberg recently said that superintelligence was “in sight,” while X CEO Elon Musk said superintelligence “is happening in real time” (Musk has also famously warned about the potential dangers of AI). OpenAI CEO Sam Altman said he expects superintelligence to happen by 2030 at the latest. None of those leaders, nor anyone notable from their companies, signed the statement.

It’s far from the only call for a slowdown in AI development. Last month, more than 200 researchers and public officials, including 10 Nobel Prize winners and multiple artificial intelligence experts, released an urgent call for a “red line” against the risks of AI. However, that letter referred not to superintelligence, but to dangers already starting to materialize, like mass unemployment, climate change and human rights abuses. Other critics are sounding alarms about a potential AI bubble that could eventually pop and take the economy down with it.

