UK’s AI Safety Institute easily jailbreaks major LLMs

In a shocking turn of events, AI systems might not be as safe as their creators make them out to be — who saw that coming, right? In a new report, the UK government’s AI Safety Institute (AISI) found that the four undisclosed LLMs tested were “highly vulnerable to basic jailbreaks.” Some unjailbroken models even generated “harmful outputs” without researchers attempting to produce them.

Most publicly available LLMs have safeguards built in to prevent them from generating harmful or illegal responses; jailbreaking simply means tricking the model into ignoring those safeguards. AISI did this using prompts from a recent standardized evaluation framework as well as prompts it developed in-house. The models all responded to at least a few harmful questions even without a jailbreak attempt. Once AISI attempted "relatively simple attacks," though, all of them responded to between 98 and 100 percent of harmful questions.
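
To make that measurement concrete, the sketch below shows the general shape of such an evaluation: each question is sent to the model once as-is and once wrapped in an attack prompt, and the fraction of answered (rather than refused) questions is compared. Everything here is a placeholder; the question list, the attack template, and the query_model and is_refusal helpers are hypothetical stand-ins, not AISI's actual methodology or code.

    # Hypothetical sketch of a jailbreak-robustness check, not AISI's tooling.
    # Assumes a generic chat model reachable through query_model() and a crude
    # string-matching refusal heuristic; real evaluations use curated question
    # sets and much more careful grading of responses.

    QUESTIONS = ["<harmful question 1>", "<harmful question 2>"]  # placeholders only
    ATTACK_TEMPLATE = "<jailbreak wrapper> {question}"            # placeholder attack prompt

    def query_model(prompt: str) -> str:
        """Stand-in for the model under test; swap in a real chat-API call here."""
        return "I can't help with that."

    def is_refusal(response: str) -> bool:
        """Very rough refusal heuristic; real evaluations grade responses more carefully."""
        return any(p in response.lower() for p in ("i can't", "i cannot", "i won't"))

    def compliance_rate(questions: list[str], attacked: bool) -> float:
        """Fraction of questions the model answers rather than refuses."""
        complied = 0
        for q in questions:
            prompt = ATTACK_TEMPLATE.format(question=q) if attacked else q
            if not is_refusal(query_model(prompt)):
                complied += 1
        return complied / len(questions)

    if __name__ == "__main__":
        print(f"baseline compliance:    {compliance_rate(QUESTIONS, attacked=False):.0%}")
        print(f"post-attack compliance: {compliance_rate(QUESTIONS, attacked=True):.0%}")

The 98 to 100 percent figure cited above corresponds to the post-attack compliance rate in a setup along these lines.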

UK Prime Minister Rishi Sunak announced plans to open the AISI at the end of October 2023, and it launched on November 2. It’s meant to “carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models, including exploring all the risks, from social harms like bias and misinformation to the most unlikely but extreme risk, such as humanity losing control of AI completely.”

The AISI’s report indicates that whatever safety measures these LLMs currently deploy are insufficient. The Institute plans to complete further testing on other AI models, and is developing more evaluations and metrics for each area of concern.


