State-Sponsored Hackers Exploit Google's Gemini AI: A Growing Threat (2025)

It's a chilling reality: state-sponsored threat actors are actively misusing Google's Gemini AI to sharpen their malicious cyber activities. Despite Google's best efforts to detect and prevent such abuse, hackers from China, Iran, Russia, and North Korea have found ways to leverage this powerful technology throughout 2025. This represents a significant evolution in cyber warfare, and it's something we all need to understand.

Google's Threat Intelligence Group (GTIG) has documented these activities in a report titled "AI Threat Tracker: Advances in Threat Actor Usage of AI Tools." The report details how Gemini is being integrated into multiple stages of attack campaigns, making them more sophisticated and harder to detect.

While Google hasn't revealed the specifics of how it monitors Gemini for misuse, the company has uncovered a wealth of information about malicious actors and their tactics. Here's the troubling part: despite Google's security measures, threat actors have found ways to bypass these protections, using social engineering to trick the AI into assisting with malicious activities.

For example, a China-linked actor pretended to be a participant in a capture-the-flag competition to coax Gemini into providing exploitation guidance. Once this technique proved successful, the actor began prefacing prompts about software exploitation with statements like, "I am working on a CTF problem." This allowed them to obtain advice on phishing, exploitation, and webshell development.

An Iranian group known as MUDDYCOAST posed as university students working on final projects or academic papers on cybersecurity to bypass safety guardrails. Under this cover, the group used Gemini to develop custom malware, including webshells and a Python-based command-and-control (C2) server, marking a shift away from relying solely on publicly available tools. In the process, they inadvertently exposed their own C2 infrastructure: a request for help with a script designed to decrypt and execute remote commands revealed hardcoded C2 domains and encryption keys.

A suspected Chinese threat actor utilized Gemini across multiple attack stages. This included initial reconnaissance on targets, researching phishing techniques, seeking assistance with lateral movement, obtaining technical support for C2 efforts, and requesting help with data exfiltration. The actor showed particular interest in attack surfaces they appeared unfamiliar with, such as cloud infrastructure, vSphere, and Kubernetes. They even demonstrated access to compromised AWS tokens for EC2 instances and used Gemini to research how to exploit temporary session credentials.

Meanwhile, the Chinese group APT41 used Gemini for help developing C++ and Golang code for a C2 framework the actor calls OSSTUN. A second Iranian group, APT42, leveraged Gemini's text generation and editing capabilities to craft phishing campaigns, often impersonating individuals from prominent think tanks and using lures related to security technology, event invitations, or geopolitical discussions.

North Korean groups were also involved. They researched cryptocurrency concepts, generated phishing lures in multiple languages, and attempted to develop credential-stealing code. One group researched the location of users' cryptocurrency wallet application data and generated Spanish-language work-related excuses and requests to reschedule meetings, demonstrating how AI helps overcome language barriers for targeting. They also attempted to misuse Gemini to develop code to steal cryptocurrency and craft fraudulent instructions impersonating software updates to extract user credentials. Another North Korean group, PUKCHONG, used Gemini to conduct research supporting custom malware development, researching exploits and improving tooling.

Google's mitigations primarily involve disabling accounts after detection, rather than real-time blocking, which creates a window where actors can extract value before disruption. This highlights the ongoing challenge of staying ahead of these sophisticated threats.

Malware Writers Dive into AI Waters

Google also identified experimental malware, hinting at how threats may evolve. These tools query language models during execution to generate malicious code on the fly. PROMPTFLUX, for example, queries Google's Gemini API during execution to rewrite its own source code on an hourly basis, attempting to evade detection through continuous self-modification. The company characterizes PROMPTFLUX as experimental, with incomplete features and API call limiters suggesting ongoing development rather than widespread deployment. The malware "currently does not have the ability to compromise a victim network or device."

PROMPTSTEAL, attributed to the Russian government-backed group APT28 and deployed against Ukrainian targets, queries the Qwen2.5-Coder-32B-Instruct model via Hugging Face's API to generate Windows commands for stealing system information and documents. The malware is designed to dynamically request these commands from the language model during operation. Google lists PROMPTSTEAL's status as "observed in operations" but characterizes it as "new malware," suggesting potentially experimental capability within operational tools.

GTIG also included PROMPTLOCK, which created a stir in the security industry after its discovery. PROMPTLOCK turned out to be a prototype created by academics, with the researchers testing it against Google's VirusTotal malware scanning service.

The Implications

The misuse of AI by state-sponsored actors represents a significant escalation in cyber warfare. It's not just about more sophisticated attacks; it's about the speed at which these attacks can be developed and deployed. The ability to generate malicious code, craft convincing phishing campaigns, and overcome language barriers with AI tools gives these actors a considerable advantage.

What do you think? Are you surprised by the extent to which these actors are leveraging AI? Do you think current security measures are sufficient, or do we need a new approach to combat these evolving threats? Share your thoughts in the comments below!
