Hackers are using Gemini to target you, Google says

Google says hackers are abusing Gemini to speed up cyberattacks, and the abuse isn’t limited to cheesy phishing spam. A new Google Threat Intelligence Group report says state-backed groups have used Gemini across multiple phases of an operation, from early target research to post-compromise work.

The activity spans clusters linked to China, Iran, North Korea, and Russia. Google says the prompts and outputs it observed covered profiling, social engineering copy, translation, coding help, vulnerability testing, and debugging when tools break during an intrusion. Fast help on routine tasks can still change the outcome.

AI help, same old playbook

Google’s researchers frame the use of AI as acceleration, not magic. Attackers already run recon, draft lures, tweak malware, and chase down errors. Gemini can tighten that loop, especially when operators need quick rewrites, language support, or code fixes under pressure.

The report describes Chinese-linked activity where an operator adopted an expert cybersecurity persona and pushed Gemini to automate vulnerability analysis and produce targeted test plans in a made-up scenario. Google also says a China-based actor repeatedly used Gemini for debugging, research, and technical guidance tied to intrusions. It’s less about new tactics, more about fewer speed bumps.

The risk isn’t just phishing

The big shift is tempo. If groups can iterate faster on targeting and tooling, defenders get less time between early signals and real damage. That also means fewer obvious pauses where mistakes, delays, or repeated manual work might surface in logs.

Google also flags a different threat that doesn’t look like classic scams at all: model extraction and knowledge distillation. In that scenario, actors with authorized API access hammer the system with prompts to replicate how it performs and reasons, then use those transcripts to train another model. Google frames it as commercial and intellectual-property harm, with potential downstream risk if it scales, and cites one example involving 100,000 prompts aimed at replicating the model’s behavior on non-English tasks.
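To make the distillation idea concrete, here is a deliberately toy sketch (not from Google’s report, and nothing like a real attack’s scale): a “teacher” function stands in for a proprietary model behind an API, the attacker harvests prompt/response pairs, and a cheap “student” is fit to mimic the observed behavior.

```python
# Toy illustration of model extraction / knowledge distillation.
# The teacher() function is a hypothetical stand-in for a model API;
# a real extraction attempt would involve huge prompt volumes and
# fine-tuning another language model on the collected transcripts.

def teacher(prompt: str) -> str:
    """Stand-in for a proprietary API: classifies a prompt's form."""
    return "question" if prompt.strip().endswith("?") else "statement"

# Step 1: hammer the API with probes and record prompt/response pairs.
probes = [
    "What time is it?",
    "The sky is blue.",
    "Is this safe?",
    "Logs rotated at midnight.",
    "Why did the build fail?",
]
dataset = [(p, teacher(p)) for p in probes]

# Step 2: "train" a student that replicates the observed behavior.
# Here the student just learns which final characters mark a question;
# the point is that it was built purely from harvested API output.
def train_student(pairs):
    question_endings = {p.strip()[-1] for p, label in pairs if label == "question"}

    def student(prompt: str) -> str:
        return "question" if prompt.strip()[-1] in question_endings else "statement"

    return student

student = train_student(dataset)

# Step 3: the student now imitates the teacher on unseen prompts.
new_prompt = "Does this mimic the teacher?"
print(student(new_prompt), teacher(new_prompt))
```

The defensive signal Google describes maps onto step 1: a legitimate user rarely needs tens of thousands of systematically varied prompts, which is why unusual API usage patterns are worth monitoring.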

What you should watch next

Google says it has disabled accounts and infrastructure tied to documented Gemini abuse, and it has added targeted defenses in Gemini’s classifiers. It also says it continues testing and relies on safety guardrails.

For security teams, the practical takeaway is to assume AI-assisted attacks will move quicker, not necessarily smarter. Track sudden improvements in lure quality, faster tooling iteration, and unusual API usage patterns, then tighten response runbooks so speed doesn’t become the attacker’s biggest advantage.

