Cybersecurity researchers have uncovered what they say is the earliest known example of malware embedding large language model (LLM) technology, signaling a watershed moment in the evolution of cybercrime.
The malware, codenamed MalTerminal, was analyzed by SentinelOne’s SentinelLABS team and presented at the LABScon 2025 security conference. Unlike conventional malicious code, MalTerminal integrates OpenAI’s GPT-4 to dynamically generate harmful payloads, representing a shift toward what experts are calling “LLM-embedded malware.”
MalTerminal: Proof-of-Concept or Weapon in Waiting?
The malware sample, discovered in a Windows executable, contained an API endpoint for OpenAI’s chat completions model that was deprecated in November 2023. Researchers believe this means MalTerminal was written prior to that date, making it the earliest recorded instance of AI-enhanced malware.
FutureCrime Summit 2026: Registrations to Open Soon for India’s Biggest Cybercrime Conference
The executable prompts a user to select between two attack modes: “ransomware” or “reverse shell.” Present alongside the binary are Python scripts with identical functionality, further demonstrating the malware’s modular design.
Interestingly, SentinelOne found no evidence of MalTerminal being deployed in real-world attacks. This suggests it may have originated as a red team tool or proof-of-concept experiment rather than an active weapon. However, the very fact that it embeds an LLM to generate malicious logic at runtime poses significant concerns.
A Qualitative Shift in Adversary Tactics
According to SentinelOne researchers Alex Delamotte, Vitaly Kamluk, and Gabriel Bernadett-Shapiro, the incorporation of LLMs into malware is not merely an incremental change but a qualitative leap in tradecraft.
By leveraging GPT-4, MalTerminal can produce ransomware code, reverse shells, or even generate analysis reports when combined with a defensive script called FalconShield, which asks the AI to identify malicious patterns in Python files.
“The incorporation of LLMs into malware marks a qualitative shift in adversary tradecraft,” SentinelOne said. “It allows malicious code to evolve dynamically, bypassing static defenses and complicating incident response.”
LLM Poisoning in Phishing Campaigns
MalTerminal is not an isolated case. Other security firms are tracking how adversaries are weaponizing AI models in phishing campaigns.
A report from StrongestLayer revealed that attackers are embedding hidden prompts inside phishing emails to trick AI-based email security tools into misclassifying malicious messages as safe.
One campaign disguised itself as a billing discrepancy notice. While the visible language appeared professional and benign, the hidden HTML code used prompt injections—with styles set to display:none; color:white; font-size:1px;—to manipulate AI filters.
This enabled emails carrying an HTML attachment to evade defenses. When opened, the attachment exploited the Follina vulnerability (CVE-2022-30190) to execute an attack chain: dropping PowerShell scripts, disabling Microsoft Defender, and installing persistence mechanisms.
Rise of AI-Powered Phishing Infrastructure
The misuse of AI isn’t limited to LLM prompts. A Trend Micro report highlights a surge in attackers using AI-driven site builders like Lovable, Netlify, and Vercel to host phishing campaigns since January 2025.
These platforms, originally designed for rapid and legitimate website creation, are being repurposed to host fake CAPTCHA pages that mask credential-harvesting redirects.
- Victims see only the CAPTCHA page, lowering suspicion.
- Automated scanners detect only the benign CAPTCHA, missing the redirect.
- Attackers exploit free hosting, automation, and recognizable branding to scale operations cheaply and quickly.
Trend Micro researchers Ryan Flores and Bakuei Matsukawa describe such AI-powered hosting services as a “double-edged sword”: while transformative for businesses, they also lower the barrier of entry for cybercriminals.
Implications: A New Era of AI-Powered Cybercrime
The discovery of MalTerminal and the rise of AI-powered phishing techniques underscore a sobering reality: AI is now a weapon in the cybercriminal arsenal.
From dynamically generating malicious payloads to deceiving AI-based security systems, adversaries are exploiting the very technologies designed to strengthen defenses.
For enterprises, this means:
- Proactive AI security audits are no longer optional.
- Stress-testing defenses against prompt injection and LLM poisoning is critical.
- Investment in AI-driven detection systems that can adapt as quickly as attackers innovate will determine resilience.
As one expert put it: “We’ve entered a phase where AI is both shield and sword. The defenders who master it first will shape the battlefield.”
