Google’s Threat Intelligence Group (GTIG) has identified an experimental malware family known as PROMPTFLUX — a strain that doesn’t just execute malicious code, but rewrites itself using artificial intelligence.
Unlike traditional malware that depends on static commands or fixed scripts, PROMPTFLUX interacts directly with Google Gemini’s API to generate new behaviours on demand, effectively creating a shape-shifting digital predator capable of evading conventional detection methods.
A Glimpse into Adaptive Malware
PROMPTFLUX represents a major shift in how attackers use technology. Instead of pre-coded evasion routines, this malware dynamically queries AI models like Gemini for what GTIG calls “just-in-time obfuscation.” In simpler terms, it asks the AI to rewrite parts of its own code whenever needed — ensuring no two executions look alike.
This makes traditional, signature-based antivirus systems nearly powerless, as the malware continuously changes its fingerprint, adapting in real time to avoid detection.
How PROMPTFLUX Operates
The malware reportedly uses Gemini’s capabilities to generate new scripts or modify existing ones mid-operation. These scripts can alter function names, encrypt variables, or disguise malicious payloads — all without human intervention.
GTIG researchers observed that PROMPTFLUX’s architecture allows it to:
- Request on-demand functions through AI queries
- Generate obfuscated versions of itself in real time
- Adapt its attack vectors based on environmental responses
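The impact of this self-rewriting on signature-based detection can be shown with a deliberately benign toy sketch. Nothing here is PROMPTFLUX's actual code or an AI call; it simply randomises identifier names locally, which is enough to demonstrate why two functionally identical variants no longer share a hash-based fingerprint:

```python
import hashlib
import secrets

def obfuscate(source: str, identifiers: list[str]) -> str:
    """Rewrite a script by giving every listed identifier a random name.

    A local, AI-free stand-in for the rewriting described above: the
    script's behaviour is unchanged, but its bytes (and therefore any
    hash-based signature) differ on every run.
    """
    out = source
    for name in identifiers:
        out = out.replace(name, f"v_{secrets.token_hex(4)}")
    return out

BENIGN_SCRIPT = "def greet(target):\n    print('hello', target)\ngreet('world')"

variant_a = obfuscate(BENIGN_SCRIPT, ["greet", "target"])
variant_b = obfuscate(BENIGN_SCRIPT, ["greet", "target"])

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# Both variants behave identically, yet a signature taken from one
# will never match the other.
print(sig_a != sig_b)  # → True
```

An AI-assisted version would go much further than renaming, restructuring control flow and payload encoding on each query, but the defensive consequence is the same: the static fingerprint changes every time.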
While PROMPTFLUX is still in its developmental stages and its access to the API appears limited, the discovery underscores how AI can be weaponised in cybercrime ecosystems.
Google’s Containment and Response
Google has moved swiftly to disable the assets and API keys associated with the PROMPTFLUX operation. According to GTIG, there is no evidence of successful attacks or widespread compromise yet. However, the incident stands as a stark warning — attackers are now experimenting with semi-autonomous, AI-driven code.
The investigation revealed that the PROMPTFLUX samples found so far contain incomplete functions, hinting that hackers are still refining the approach. But even as a prototype, it highlights the growing intersection of machine learning and malicious automation.
A Growing Underground AI Market
Experts warn that PROMPTFLUX is just the beginning. A shadow economy of illicit AI tools is emerging, allowing less-skilled cybercriminals to leverage AI for advanced attacks. Underground forums are now offering AI-powered reconnaissance scripts, phishing generators, and payload enhancers.
State-linked groups from North Korea, Iran, and China have reportedly begun experimenting with similar techniques — using AI to streamline reconnaissance, automate social engineering, and even mimic human operators in digital intrusions.
Defenders Turn to AI Too
The cybersecurity battle is no longer simply human versus human; increasingly, it is AI versus AI. Defenders are deploying AI tools of their own, such as Google's "Big Sleep" agent, which autonomously hunts for exploitable vulnerabilities, alongside machine learning systems that flag anomalies, reverse-engineer adaptive code, and trace AI-generated obfuscation patterns.
Security teams are being urged to:
- Prioritise behaviour-based detection over static signature scans
- Monitor API usage patterns for suspicious model interactions
- Secure developer credentials and automation pipelines against misuse
- Invest in AI-driven defensive frameworks that can predict evasive tactics
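To make the second recommendation concrete, here is a minimal, hypothetical sketch of behaviour-based API monitoring: it flags a client whose per-window request count spikes far above its own rolling baseline. The client name, window size, and z-score threshold are illustrative assumptions, not a production design:

```python
from collections import deque
from statistics import mean, stdev

class ApiUsageMonitor:
    """Flag clients whose per-window request counts spike far above
    their own rolling baseline -- a simple behaviour-based check,
    as opposed to matching static signatures."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.window = window
        self.z_threshold = z_threshold
        self.history: dict[str, deque] = {}

    def observe(self, client_id: str, requests_this_window: int) -> bool:
        """Record one window's request count; return True if anomalous."""
        hist = self.history.setdefault(client_id, deque(maxlen=self.window))
        anomalous = False
        if len(hist) >= 5:  # need some baseline before judging
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and (requests_this_window - mu) / sigma > self.z_threshold:
                anomalous = True
        hist.append(requests_this_window)
        return anomalous

monitor = ApiUsageMonitor()
# Normal traffic: roughly 10 requests per window.
for count in [9, 11, 10, 12, 10, 9, 11, 10]:
    assert not monitor.observe("build-bot-7", count)
# A sudden burst of model queries stands out against the baseline.
print(monitor.observe("build-bot-7", 90))  # → True
```

Real deployments would correlate this with credential context and the content of model interactions, but the principle holds: judge each client against its own behaviour, not against a fixed signature.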
The Future: Cybersecurity in the Age of Adaptive Intelligence
PROMPTFLUX marks the early stage of a new class of cyber threats — self-evolving malware. As AI becomes more integrated into both legitimate development and malicious innovation, defenders must evolve just as quickly.
The next generation of cybersecurity will depend not only on firewalls and encryption but on the ability to detect intent — to distinguish between machine creativity and machine deception.