Researchers cause GitLab AI developer assistant to turn safe code malicious

Date:

Share:



Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these tools are, by temperament if not default, easily tricked by malicious actors into performing hostile actions against their users.

Researchers from security firm Legit on Thursday demonstrated an attack that induced Duo into inserting malicious code into a script it had been instructed to write. The attack could also leak private code and confidential issue data, such as zero-day vulnerability details. All that’s required is for the user to instruct the chatbot to interact with a merge request or similar content from an outside source.

AI assistants’ double-edged blade

The mechanism for triggering the attacks is, of course, prompt injections. Among the most common forms of chatbot exploits, prompt injections are embedded into content a chatbot is asked to work with, such as an email to be answered, a calendar to consult, or a webpage to summarize. Large language model-based assistants are so eager to follow instructions that they’ll take orders from just about anywhere, including sources that can be controlled by malicious actors.

The attacks targeting Duo came from various resources that are commonly used by developers. Examples include merge requests, commits, bug descriptions and comments, and source code. The researchers demonstrated how instructions embedded inside these sources can lead Duo astray.

“This vulnerability highlights the double-edged nature of AI assistants like GitLab Duo: when deeply integrated into development workflows, they inherit not just context—but risk,” Legit researcher Omer Mayraz wrote. “By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo’s behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes.”



Source link

━ more like this

T-Mobile 5G Home Internet’s latest deal gives you up to $300 back 

If you’ve been considering a switch from traditional cable, T-Mobile 5G Home Internet’s newest promotion may be the most compelling reason yet to make the move. The...

Rad Power Bikes gets a new owner, pledge to build bikes in the US

Life EV has completed a court-approved acquisition of Rad Power Bikes, granting a second life to the troubled e-bike brand.The Florida-based Life EV...

Microsoft Copilot just made browser switching a thing of the past

If you have ever been mid-conversation with Copilot, clicked a link, and then spent the next few minutes trying to find your way...

Tech Reader Podcast: Is the MacBook Neo the one?

It's been a wild week for Apple. After announcing a slew of new hardware, the company capped things off with its cheapest laptop...

Roku makes discovering your next favorite show fun with a new interactive experience

Roku is rolling out a new interactive experience called Roklue that turns aimless channel browsing into a pop culture game designed to help...
spot_img