State-sponsored hackers are exploiting Google's Gemini AI model across every stage of their malicious campaigns, from initial reconnaissance to post-compromise operations.
The threat landscape is diverse: hackers from China, Iran, North Korea, and Russia are all using Gemini for malicious purposes. They profile targets, gather open-source intelligence, craft phishing lures, translate text, write code, test vulnerabilities, and troubleshoot errors - a comprehensive toolkit for illicit activity.
Cybercriminals are not merely experimenting with AI; they are actively integrating it into their arsenals. Google's Threat Intelligence Group (GTIG) reports that APT adversaries use Gemini from start to finish, from crafting phishing lures to exfiltrating data.
Chinese threat actors, for instance, have employed Gemini to automate vulnerability analysis and provide tailored testing plans. In one instance, they fabricated a scenario, directing Gemini to analyze Remote Code Execution techniques and SQL injection test results against specific US targets.
Iranian adversary APT42 has leveraged Google's LLM for social engineering campaigns, using it as a development platform to rapidly create customized malicious tools. These actors are not just using AI as an assistant; they are embedding AI capabilities directly into their malware.
Take HonestCue, a proof-of-concept malware framework observed in 2025. It uses the Gemini API to generate C# code for second-stage malware, compiling and executing payloads in memory. CoinBait, a phishing kit disguised as a cryptocurrency exchange, also bears signs of AI code generation tools in its development.
The indicators of LLM use are telling: logging messages in the malware's source code prefixed with "Analytics:" could give defenders a trail for tracking its data exfiltration process.
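Defenders could hunt for that marker in captured telemetry. A minimal sketch in Python - the log format and sample lines here are invented for illustration, only the "Analytics:" prefix comes from the report:

```python
import re

# Match the "Analytics:"-prefixed messages GTIG observed in AI-generated
# malware source code, capturing whatever detail follows the prefix.
MARKER = re.compile(r"Analytics:\s*(?P<detail>.+)")

def find_exfil_markers(log_lines):
    """Return (line_number, detail) pairs for lines bearing the marker."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        m = MARKER.search(line)
        if m:
            hits.append((lineno, m.group("detail").strip()))
    return hits

# Hypothetical captured output from a sandboxed sample.
sample = [
    "2025-06-01 12:00:01 service started",
    "2025-06-01 12:00:05 Analytics: staging 3 files for upload",
    "2025-06-01 12:00:09 heartbeat ok",
]
print(find_exfil_markers(sample))
```

In practice the same pattern could be fed to a SIEM or YARA-style rule rather than a standalone script; the point is simply that AI-generated boilerplate logging can become a free detection signature.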
GTIG researchers believe CoinBait was built with the Lovable AI platform, as evidenced by the developer's use of the Lovable Supabase client and references to lovable.app.
Cybercriminals are also employing generative AI services in ClickFix campaigns, delivering AMOS info-stealing malware for macOS. Users are lured into executing malicious commands through deceptive ads listed in search results for specific troubleshooting queries.
The report further highlights attempts to extract and distill the AI model, with organizations systematically querying the system to reproduce its decision-making processes. This constitutes a significant commercial, competitive, and intellectual property issue for the creators of these models.
"Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost," GTIG researchers explain.
Google flags these attacks as a threat because they constitute intellectual property theft, scale easily, and undermine the AI-as-a-service business model, with potential knock-on effects for end users.
In a large-scale attack, Gemini AI was targeted by 100,000 prompts, posing questions aimed at replicating the model's reasoning in non-English languages. Google has taken action, disabling accounts and infrastructure tied to documented abuse and implementing targeted defenses in Gemini's classifiers.
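Defenses against extraction at this scale typically start with volumetric signals. Here is a minimal sketch of a sliding-window prompt counter per account - the threshold, window, and class name are illustrative assumptions, not Google's actual limits or classifier design:

```python
from collections import defaultdict, deque

class ExtractionVolumeMonitor:
    """Flag accounts whose prompt rate exceeds a threshold within a window.

    Purely illustrative: a real defense would also weigh prompt content
    (e.g. systematic reasoning-replication queries), not volume alone.
    """

    def __init__(self, max_prompts=1000, window_seconds=3600):
        self.max_prompts = max_prompts
        self.window = window_seconds
        self.events = defaultdict(deque)  # account id -> prompt timestamps

    def record(self, account, timestamp):
        """Record one prompt; return True if the account should be flagged."""
        q = self.events[account]
        q.append(timestamp)
        # Evict timestamps that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_prompts

monitor = ExtractionVolumeMonitor(max_prompts=3, window_seconds=60)
flags = [monitor.record("acct-1", t) for t in (0, 10, 20, 30)]
print(flags)  # the fourth prompt exceeds the limit
```

A 100,000-prompt campaign spread across many accounts would evade a single-account counter, which is why the targeted classifier defenses Google describes matter alongside simple rate limiting.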
The company assures that it designs AI systems with robust security measures and strong safety guardrails, regularly testing the models to enhance their security and safety.