Hackers tricked ChatGPT, Grok and Google into helping them install malware

Ever since reporting earlier this year on how easy it is to trick an agentic browser, I’ve been following the intersections between modern AI and old-school scams. Now, there’s a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When executed by unknowing users, these commands prompt computers to give the hackers the access they need to install malware.

The warning comes by way of a recent report from detection-and-response firm Huntress. Here’s how it works. First, the threat actor has a conversation with an AI assistant about a common search term, during which they prompt the AI to suggest pasting a certain command into a computer’s terminal. They make the chat publicly visible and pay to boost it on Google. From then on, whenever someone searches for the term, the malicious instructions will show up high on the first page of results.

Huntress ran tests on both ChatGPT and Grok after discovering that a Mac-targeting data exfiltration attack called AMOS had originated from a simple Google search. The user of the infected device had searched "clear disk space on Mac," clicked a sponsored ChatGPT link and — lacking the training to see that the advice was hostile — executed the command. This let the attackers install the AMOS malware. Huntress's testers found that both chatbots could be induced to replicate the attack vector.

As Huntress points out, the evil genius of this attack is that it bypasses almost all the traditional red flags we've been taught to look for. The victim doesn't have to download a file, install a suspicious executable or even click a shady link. The only things they have to trust are Google and ChatGPT, which they've either used before or heard about nonstop for the last several years. They're primed to trust what those sources tell them. Even worse, while the link to the ChatGPT conversation has since been taken off Google, it was up for at least half a day after Huntress published its blog post.

This news comes at an already fraught moment for both chatbots' makers. Grok has been getting dunked on for sucking up to Elon Musk in despicable ways, while ChatGPT creator OpenAI has been falling behind the competition. It's not yet clear whether the attack can be replicated with other chatbots, but for now, I strongly recommend caution. Alongside your other common-sense cybersecurity steps, never paste anything into your command terminal or your browser's URL bar unless you're certain of what it will do.


