Researchers show that AI-controlled robots can be jailbroken | Tech Reader

Date:

Share:



Researchers at Penn Engineering have reportedly uncovered previously unidentified security vulnerabilities in a number of AI-governed robotic platforms.

“Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world,” George Pappas, UPS Foundation Professor of Transportation in Electrical and Systems Engineering, said in a statement.

Pappas and his team developed an algorithm, dubbed RoboPAIR, “the first algorithm designed to jailbreak LLM-controlled robots.” And unlike existing prompt engineering attacks aimed at chatbots, RoboPAIR  is built specifically to “elicit harmful physical actions” from LLM-controlled robots, like the bipedal platform Boston Dynamics and TRI are developing.

RoboPAIR reportedly achieved a 100% success rate in jailbreaking three popular robotics research platforms: the four-legged Unitree Go2, the four-wheeled Clearpath Robotics Jackal, and the Dolphins LLM simulator for autonomous vehicles. It took mere days for the algorithm to fully gain access to those systems and begin bypassing safety guardrails. Once the researchers had taken control, they were able to direct the platforms to take dangerous actions, such as driving through road crossings without stopping.

“Our results reveal, for the first time, that the risks of jailbroken LLMs extend far beyond text generation, given the distinct possibility that jailbroken robots could cause physical damage in the real world,” the researchers wrote.

The Penn researchers are working with the platform developers to harden their systems against further intrusion, but warn that these security issues are systemic.

“The findings of this paper make abundantly clear that having a safety-first approach is critical to unlocking responsible innovation,” Vijay Kumar, a coauthor from the University of Pennsylvania, told The Independent. “We must address intrinsic vulnerabilities before deploying AI-enabled robots in the real world.”

“In fact, AI red teaming, a safety practice that entails testing AI systems for potential threats and vulnerabilities, is essential for safeguarding generative AI systems,” added Alexander Robey, the paper’s first author, “because once you identify the weaknesses, then you can test and even train these systems to avoid them.”








Source link

━ more like this

High street sales fall further behind online trade with disappointing July – London Business News | Londonlovesbusiness.com

Total like-for-like retail sales (in-store and online) grew by just +2.8% in July, compared to a base of +3.0% in July 2024, according...

More people are contributing to private pensions due to the inadequacy of the state pension – London Business News | Londonlovesbusiness.com

More people are contributing to private pensions year on year due to the inadequacy of the state pension, say leading audit, tax and...

Dozens of high-risk tankers evade sanctions despite latest OFAC crackdown – London Business News | Londonlovesbusiness.com

The U.S. Treasury’s Office of Foreign Assets Control (OFAC) recently announced a major sanctions package targeting Mohammad Hossein Shemkhani and key components of...

Businesses optimism in the economy dipped nine points to 50% – London Business News | Londonlovesbusiness.com

Business confidence in London fell two points during July to 62%, according to the latest Business Barometer from Lloyds. While companies in London reported...

Employment Rights Bill: New survey reveals biggest challenges for employers – London Business News | Londonlovesbusiness.com

A new survey by HR and payroll software provider Ciphr has revealed which measures in the Employment Rights Bill are thought to be...
spot_img