Google says hackers are abusing Gemini to speed up cyberattacks, and it isn't limited to cheesy phishing spam. In a new report, the Google Threat Intelligence Group says state-backed groups have used Gemini across multiple phases of an operation, from early target research to post-compromise work.
The activity spans clusters linked to China, Iran, North Korea, and Russia. Google says the prompts and outputs it observed covered target profiling, social engineering copy, translation, coding help, vulnerability testing, and debugging when tools break mid-intrusion. Even fast help on routine tasks can change an operation's outcome.
AI help, same old playbook
Google’s researchers frame the use of AI as acceleration, not magic. Attackers already run recon, draft lures, tweak malware, and chase down errors. Gemini can tighten that loop, especially when operators need quick rewrites, language support, or code fixes under pressure.
The report describes Chinese-linked activity where an operator adopted an expert cybersecurity persona and pushed Gemini to automate vulnerability analysis and produce targeted test plans in a made-up scenario. Google also says a China-based actor repeatedly used Gemini for debugging, research, and technical guidance tied to intrusions. It’s less about new tactics, more about fewer speed bumps.
The risk isn’t just phishing
The big shift is tempo. If groups can iterate faster on targeting and tooling, defenders get less time between early signals and real damage. That also means fewer obvious pauses where mistakes, delays, or repeated manual work might surface in logs.
Google also flags a different threat that doesn't look like classic scams at all: model extraction and knowledge distillation. In that scenario, actors with authorized API access hammer the system with prompts to replicate how it performs and reasons, then use those outputs to train another model. Google frames it as commercial and intellectual property harm with potential downstream risk if it scales, and cites one example involving 100,000 prompts aimed at replicating the model's behavior on non-English tasks.
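For intuition, here's a minimal sketch of what distillation-style harvesting looks like in principle: collect the target model's answers to a large prompt set, then use the pairs as training data for a separate "student" model. Everything here is a hypothetical stand-in, not anything from Google's report; query_target represents any authorized API client, and the file format is just one common fine-tuning layout.

```python
# Hypothetical sketch of knowledge distillation via an API: gather the
# target model's responses to many prompts, then save prompt/response
# pairs as fine-tuning data for a separate student model.

import json

def query_target(prompt: str) -> str:
    """Stand-in for an authorized API call to the target model."""
    return f"<model response to: {prompt}>"

def harvest(prompts: list[str], out_path: str) -> None:
    """Write one JSON prompt/completion pair per line, the format many
    fine-tuning pipelines accept."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            pair = {"prompt": prompt, "completion": query_target(prompt)}
            f.write(json.dumps(pair, ensure_ascii=False) + "\n")

# Small demo set; at the scale Google describes (100,000 prompts), the
# resulting dataset can teach a student model to imitate the target's
# behavior across the covered task distribution.
prompts = [f"Translate example sentence {i} into Korean." for i in range(3)]
harvest(prompts, "distillation_pairs.jsonl")
```

The point of the sketch is how ordinary the mechanics are: no exploit is involved, only legitimate API access used at unusual volume, which is why the detection signal is usage pattern rather than payload.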
What you should watch next
Google says it has disabled accounts and infrastructure tied to the documented Gemini abuse and has added targeted defenses to Gemini's classifiers. It also says it continues testing those defenses and relies on its safety guardrails.
For security teams, the practical takeaway is to assume AI-assisted attacks will move faster, not necessarily get smarter. Watch for sudden improvements in lure quality, faster tooling iteration, and unusual API usage patterns, then tighten response runbooks so speed doesn't become the attacker's biggest advantage.
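To make the "unusual API usage patterns" point concrete, here is a rough sketch of one such check: flagging accounts whose latest daily request volume spikes far beyond their own baseline. The data shape, window, and threshold are illustrative assumptions, not guidance from the report.

```python
# Rough sketch: flag API accounts whose latest daily request count is a
# statistical outlier against that account's own recent history, the
# kind of spike bulk prompt harvesting tends to produce.

from statistics import mean, stdev

def flag_spikes(daily_counts: dict[str, list[int]],
                z_threshold: float = 4.0) -> list[str]:
    """Return account IDs whose most recent day is a z-score outlier
    relative to the account's earlier baseline."""
    flagged = []
    for account, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 7:
            continue  # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1.0  # flat baseline; avoid division by zero
        if (latest - mu) / sigma > z_threshold:
            flagged.append(account)
    return flagged

usage = {
    "acct-a": [120, 130, 110, 125, 118, 122, 127, 119],     # steady use
    "acct-b": [200, 210, 190, 205, 198, 202, 195, 25_000],  # harvesting-like spike
}
print(flag_spikes(usage))  # ['acct-b']
```

A per-account baseline matters here: a fixed global cap would miss a big customer quietly doubling its volume while flagging small accounts on normal growth.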
