Google’s Threat Intelligence Group (GTIG) has uncovered a major shift in the global cyber threat landscape, revealing that attackers are no longer using artificial intelligence (AI) solely for productivity or automation. Instead, they are now integrating AI directly into active operations, creating malware that can rewrite its own code and adapt in real time.
A new phase of AI-enabled attacks
In its latest threat analysis, GTIG confirmed that government-backed and criminal groups have begun to leverage large language models (LLMs) to dynamically generate, modify, and obfuscate malicious code during execution. This development marks what Google calls "a new operational phase of AI abuse," where tools are capable of adjusting behavior mid-run.
The report expands on Google’s January 2025 publication, "Adversarial Misuse of Generative AI," detailing how adversaries are experimenting with AI across the entire attack lifecycle, from reconnaissance and phishing to command and control operations.
"These findings show that threat actors are deploying AI-powered tools that dynamically alter behavior mid-execution," GTIG wrote, describing a critical inflection point for cybersecurity.
The rise of “just-in-time” malware generation
For the first time, Google identified malware families such as PROMPTFLUX and PROMPTSTEAL that rely on LLMs to build malicious scripts on demand. GTIG calls this process "just-in-time code creation." Rather than embedding malicious functions directly into binary files, these programs query LLMs like Gemini or Qwen2.5-Coder during runtime to produce or modify their logic.
This approach makes detection significantly harder. Each new instance can regenerate its code, effectively rendering signature-based defenses obsolete. According to Google, this evolving technique could signal the beginning of more autonomous, AI-driven malware ecosystems.
PROMPTFLUX: The self-rewriting VBScript
GTIG’s report describes PROMPTFLUX as an early-stage, experimental malware family written in VBScript. It communicates with Gemini’s API to request custom obfuscation techniques each time it runs, facilitating “just-in-time” self-modification.
The malware’s key component, known as the "Thinking Robot" module, periodically queries Gemini to obtain VBScript code designed to evade antivirus detection. During execution, it uses a hard-coded API key to send POST requests to the Gemini API endpoint, targeting gemini-1.5-flash-latest, an alias that always resolves to the most recent revision of that model.
Although PROMPTFLUX is currently in development and lacks the ability to compromise devices, Google emphasizes that the malware’s design points toward an intent to create an adaptive, metamorphic code generator, capable of rewriting itself indefinitely to resist detection.
GTIG noted that several iterations of PROMPTFLUX have already appeared, with some replacing the "Thinking Robot" feature with new functions like "Thinging," which directs the LLM to regenerate the entire source code every hour.
PROMPTSTEAL: AI-guided data theft
Another strain, PROMPTSTEAL, has been linked to the Russian state-backed group APT28 (also known as FROZENLAKE) and used in attacks targeting Ukraine. The malware uses the Qwen2.5-Coder-32B-Instruct model hosted on Hugging Face to generate commands that collect sensitive documents and system information.
Instead of embedding specific commands in its code, PROMPTSTEAL masquerades as a harmless image generation program. Behind the scenes, it queries the Hugging Face-hosted model to generate one-line Windows commands for collecting and exfiltrating user data, then executes them automatically.
GTIG identified prompts instructing the model to:
- Create directories and gather system information
- Copy PDF, TXT, and Office documents from local folders
- Package the data into a single file for exfiltration
These requests represent the first observed examples of malware using an LLM to produce executable code during live operations.
Broader misuse and countermeasures
The report also highlights attempts by threat actors from China, North Korea, and Iran to exploit LLMs like Gemini for phishing content, infrastructure building, and exploit development. Some attackers even used social engineering prompts, framing their requests as harmless “capture-the-flag” competitions to bypass safety filters.
Google says it has taken immediate steps to disrupt these malicious operations, disabling the accounts and assets tied to AI abuse. The company’s DeepMind division has also strengthened Gemini’s internal defenses, updating model classifiers and prompt moderation to prevent similar misuse.
"At Google, we are committed to developing AI responsibly and take proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors," the report states.
Building secure and responsible AI
Google emphasized that these findings underline the importance of building AI with robust safety guardrails. Through initiatives like the Secure AI Framework (SAIF), the company is working to establish best practices for securing machine learning systems while enabling the industry to detect and mitigate AI-driven threats.
The rise of "just-in-time" code-generating malware demonstrates that adversaries are innovating as fast as defenders. As GTIG notes, the challenge for cybersecurity moving forward will be ensuring that AI models designed to empower individuals do not become tools for automating harm.
