Steve Wozniak, Prince Harry and 800 others want a ban on AI ‘superintelligence’

More than 800 public figures including Steve Wozniak and Prince Harry, along with AI scientists, former military leaders and CEOs signed a statement demanding a ban on AI work that could lead to superintelligence, The Financial Times reported. “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” it reads.

The signers span sectors and political affiliations, including AI researcher and Nobel Prize winner Geoffrey Hinton, former Trump aide Steve Bannon, former Chairman of the Joint Chiefs of Staff Mike Mullen and rapper Will.i.am. The statement comes from the Future of Life Institute, which said that AI developments are occurring faster than the public can comprehend.

“We’ve, at some level, had this path chosen for us by the AI companies and founders and the economic system that’s driving them, but no one’s really asked almost anybody else, ‘Is this what we want?'” the institute’s executive director, Anthony Aguirre, told NBC News.

Artificial general intelligence (AGI) refers to the ability of machines to reason and perform tasks as well as a human can, while superintelligence would enable AI to outperform even human experts. That potential ability has been cited by critics (and popular culture in general) as a grave risk to humanity. So far, though, AI has proven useful only for a narrow range of tasks and consistently fails at complex ones like self-driving.

Despite the lack of recent breakthroughs, companies like OpenAI are pouring billions into new AI models and the data centers needed to run them. Meta CEO Mark Zuckerberg recently said that superintelligence was “in sight,” while X CEO Elon Musk said superintelligence “is happening in real time” (Musk has also famously warned about the potential dangers of AI). OpenAI CEO Sam Altman said he expects superintelligence to happen by 2030 at the latest. None of those leaders, nor anyone notable from their companies, signed the statement.

It’s far from the only call for a slowdown in AI development. Last month, more than 200 researchers and public officials, including 10 Nobel Prize winners and multiple artificial intelligence experts, released an urgent call for a “red line” against the risks of AI. However, that letter referred not to superintelligence, but to dangers already starting to materialize, like mass unemployment, climate change and human rights abuses. Other critics are sounding alarms about a potential AI bubble that could eventually pop and take the economy down with it.
