Hackers are using Gemini to target you, Google says

Google says hackers are abusing Gemini to speed up cyberattacks, and the abuse isn't limited to cheesy phishing spam. In a new report, the Google Threat Intelligence Group says state-backed groups have used Gemini across multiple phases of an operation, from early target research to post-compromise work.

The activity spans clusters linked to China, Iran, North Korea, and Russia. Google says the prompts and outputs it observed covered target profiling, social engineering copy, translation, coding help, vulnerability testing, and debugging when tools break mid-intrusion. Even fast help on routine tasks can change the outcome of an operation.

AI help, same old playbook

Google’s researchers frame the use of AI as acceleration, not magic. Attackers already run recon, draft lures, tweak malware, and chase down errors. Gemini can tighten that loop, especially when operators need quick rewrites, language support, or code fixes under pressure.

The report describes Chinese-linked activity where an operator adopted an expert cybersecurity persona and pushed Gemini to automate vulnerability analysis and produce targeted test plans in a made-up scenario. Google also says a China-based actor repeatedly used Gemini for debugging, research, and technical guidance tied to intrusions. It’s less about new tactics, more about fewer speed bumps.

The risk isn’t just phishing

The big shift is tempo. If groups can iterate faster on targeting and tooling, defenders get less time between early signals and real damage. That also means fewer obvious pauses where mistakes, delays, or repeated manual work might surface in logs.

Google also flags a different threat that doesn't look like classic scams at all: model extraction and knowledge distillation. In that scenario, actors with authorized API access hammer the system with prompts to replicate how it performs and reasons, then use that knowledge to train another model. Google frames the harm as commercial and intellectual-property theft, with potential downstream risk if it scales; one observed example involved 100,000 prompts aimed at replicating the model's behavior on non-English tasks.
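
To make the mechanics concrete, here is a minimal Python sketch of that extraction loop. Everything in it is a hypothetical illustration: query_teacher stands in for an authorized API client, not any real Gemini endpoint, and the loop only shows the shape of the pattern, querying at scale and logging prompt/response pairs as training data.

```python
import json

def query_teacher(prompt: str) -> str:
    # Stand-in for an authorized API call to the target model; a real
    # extraction run would send a request here for every prompt.
    return f"(model response to: {prompt})"

def collect_pairs(prompts: list[str], out_path: str = "pairs.jsonl") -> None:
    # Each (prompt, response) pair becomes supervised fine-tuning data
    # for a "student" model trained to mimic the target.
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "response": query_teacher(prompt)}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# The case in the report involved 100,000 prompts focused on non-English
# tasks; at that scale, the collected pairs can approximate the target's
# behavior well enough to train an imitation model.
collect_pairs(["Translate 'hello, world' into Icelandic."])
```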

What you should watch next

Google says it has disabled accounts and infrastructure tied to documented Gemini abuse and has added targeted defenses to Gemini's classifiers. It also says it continues testing its models and relies on safety guardrails to limit misuse.

For security teams, the practical takeaway is to assume AI-assisted attacks will move faster, not necessarily smarter. Track sudden improvements in lure quality, faster tooling iteration, and unusual API usage patterns, then tighten response runbooks so speed doesn't become the attacker's biggest advantage.
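
On the "unusual API usage patterns" point, one concrete signal is per-account request volume that spikes far above that account's own baseline, the shape a 100,000-prompt extraction run would leave in the logs. Below is a minimal sketch, assuming daily request counts per account are already aggregated; the input shape and the z-score threshold are illustrative assumptions, not any product's schema.

```python
from statistics import mean, stdev

def flag_volume_spikes(daily_counts: dict[str, list[int]],
                       z_threshold: float = 4.0) -> list[tuple[str, int, float]]:
    # daily_counts maps account_id -> daily request counts, oldest first.
    # Flags accounts whose most recent day is a large outlier versus
    # that account's own history.
    flagged = []
    for account, counts in daily_counts.items():
        history, today = counts[:-1], counts[-1]
        if len(history) < 7:
            continue  # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1.0  # avoid divide-by-zero on perfectly flat baselines
        if (today - mu) / sigma > z_threshold:
            flagged.append((account, today, mu))
    return flagged

# Example: an account that jumps from ~100 requests/day to 20,000 in a
# single day, consistent with bulk prompt harvesting.
logs = {"acct-123": [95, 110, 102, 98, 105, 99, 101, 20000]}
print(flag_volume_spikes(logs))
```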


