Yotam Gutman
13.4.2026
A global FortiGate campaign shows how generative AI and MCP orchestration are turning common security gaps into scalable, AI-native cyber attacks against internet-exposed devices.
A recent cyber campaign targeting Fortinet FortiGate devices marks a turning point in how attacks are executed. Over the course of just five weeks, more than 600 firewalls across 55 countries were compromised. There were no zero-day vulnerabilities, no advanced exploitation chains, and no evidence of elite threat actors. Instead, the campaign relied on exposed management interfaces, weak credentials, and the absence of multi-factor authentication.
On the surface, this looks like a familiar story. These weaknesses have been known for years. What makes this campaign different is not what was exploited, but how it was executed.
The attacker used commercial generative AI tools to automate and scale the entire operation. Tasks that previously required time, expertise, and manual effort were compressed into an efficient, repeatable workflow. Reconnaissance, script generation, credential testing, and data extraction were all accelerated. The result was not a more sophisticated attack, but a dramatically more scalable one.
Once access was achieved, the attacker extracted sensitive configuration data from the firewalls, including VPN credentials, administrator accounts, firewall rules, and internal network topology. This information effectively turned perimeter devices into entry points for deeper network access. The attack did not stop at compromise; it created the conditions for persistence and lateral movement.
What is particularly important is that this campaign was opportunistic. It did not target specific organizations. Instead, it scanned the internet for exposed systems and exploited whatever it found. This reflects a broader shift in cyber operations, where scale replaces precision. If a system is accessible and vulnerable, it becomes a target.
However, the most significant development goes beyond AI-assisted automation. Investigations uncovered a custom attack framework built around LLMs, coordinated through a Model Context Protocol (MCP) server. This MCP layer acted as an orchestration engine, connecting multiple AI models and managing the flow of the attack.
This changes the role of AI entirely. Instead of being used as a tool for isolated tasks, AI became part of a continuous system that operated across the entire kill chain. The MCP framework enabled persistent context, allowing the attack to evolve as it progressed. It delegated tasks between models, generated commands dynamically, and adapted based on results. In effect, it turned a sequence of actions into a semi-autonomous process.
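The coordination pattern described here — a central engine holding persistent context, delegating tasks to multiple models, and adapting to results — is worth understanding in the abstract, not least for defenders modeling it. Below is a minimal, deliberately generic sketch of such an orchestration loop. The class and function names are hypothetical, the "models" are inert stubs rather than real LLM calls, and nothing here reflects the attacker's actual framework:

```python
# Abstract sketch of an orchestration loop in the MCP style: a coordinator
# keeps shared context, routes each step to a registered "model", and feeds
# every result back into the context so later steps can adapt to earlier
# outcomes. The stubs below are placeholders, not real LLM calls.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    # Persistent context shared across every stage of the workflow.
    context: dict = field(default_factory=dict)
    models: dict = field(default_factory=dict)

    def register(self, name: str, model: Callable[[dict], dict]) -> None:
        self.models[name] = model

    def run(self, plan: list) -> dict:
        for step in plan:
            # Delegate the step to its model, passing the full accumulated
            # context so the step can adapt to prior results.
            result = self.models[step](self.context)
            self.context.update(result)
            # Adaptive behaviour: any step can short-circuit the plan.
            if self.context.get("abort"):
                break
        return self.context

# Stub "models" -- in the campaign described above, roles like these would
# be played by separate LLMs coordinated through an MCP server.
def triage(ctx: dict) -> dict:
    return {"items_found": 3}

def prioritise(ctx: dict) -> dict:
    # Reads an earlier step's output from the shared context.
    return {"queue": list(range(ctx["items_found"]))}

orch = Orchestrator()
orch.register("triage", triage)
orch.register("prioritise", prioritise)
final = orch.run(["triage", "prioritise"])
print(final)  # {'items_found': 3, 'queue': [0, 1, 2]}
```

The key property is that state flows forward: each stage reads what previous stages wrote, which is what turns a sequence of isolated tool invocations into a semi-autonomous process.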
The attack lifecycle itself was fully integrated. Large-scale scanning identified thousands of potential targets across continents. Credential abuse was executed at scale without the need for exploits. Configuration data was extracted and analyzed, enabling the system to understand internal environments and prioritize further actions. Decision-making was no longer static. The system adjusted its behavior in real time, moving on from hardened targets and focusing on easier ones.
This represents a fundamental shift from manual execution to AI-driven orchestration. The attacker was no longer just running tools. They had built a system that could run the operation.
This development aligns closely with broader findings from Google’s Threat Intelligence Group, which describes the evolution of adversarial AI use across three stages: distillation, experimentation, and integration. The FortiGate campaign clearly sits in the integration phase, where AI is embedded across the entire attack lifecycle.

Integration means continuity. AI is not used at a single step, such as generating phishing emails or writing code. It is used throughout the entire process, with outputs from one stage feeding directly into the next. The MCP framework exemplifies this by maintaining context and enabling coordinated decision-making across multiple stages of the attack.

At the same time, attackers are moving through experimentation. They are testing different models, chaining them together, and refining their workflows in real time. This allows them to optimize attacks during execution, rather than relying on pre-defined playbooks. The use of multiple models within the MCP environment reflects this dynamic approach.
Distillation introduces another dimension. Attackers are beginning to replicate the capabilities of advanced models, reducing their reliance on external providers. Over time, this could lead to self-contained attack systems that operate independently, embedding AI directly into attacker-controlled infrastructure. The modular nature of MCP-based frameworks makes this transition more likely.

Taken together, these developments point to the emergence of AI-native cyber attacks. These are not traditional attacks enhanced by AI, but attacks designed around AI from the ground up. They are modular, reusable, and adaptive. They scale horizontally, targeting large numbers of systems rather than specific high-value targets. They evolve continuously, improving through iteration and feedback.
The implications for organizations are significant. The traditional model of cybersecurity assumes that attackers are constrained by skill, time, and resources. It assumes that sophisticated attacks are relatively rare, and that there is a window of opportunity to detect and respond. These assumptions are increasingly outdated.

In an AI-driven threat landscape, the limiting factor is no longer expertise. It is access. If a system is exposed and vulnerable, it can be discovered and exploited at scale. The speed of execution reduces the effectiveness of reactive defenses. Detection and response remain important, but they are no longer sufficient on their own.

The FortiGate campaign illustrates this clearly. The attack did not succeed because of a novel technique. It succeeded because it removed the friction from exploiting known weaknesses. AI compressed the cost of execution, turning common misconfigurations into global exposure.
This shifts the focus from defending against sophisticated threats to eliminating basic exposure. Remote access systems, management interfaces, and authentication mechanisms become critical points of risk. When these systems are accessible and insufficiently protected, they become part of a global attack surface that can be continuously scanned and exploited.

The broader lesson is not about Fortinet or any specific technology. It is about the changing nature of cyber risk. Attacks are becoming systems rather than events. They are designed to run continuously, adapt dynamically, and scale without human limitations.
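To make the exposure point concrete: FortiOS already provides controls that would have blunted this campaign, such as limiting which services an interface exposes and which source networks may reach an admin account at all. A hedged sketch follows — exact syntax varies by FortiOS version, the interface name and subnet are placeholders, and the `#` annotations are explanatory, not CLI syntax:

```
# Limit which services are reachable on an internet-facing interface
# (no HTTPS/SSH management from the WAN side)
config system interface
    edit "wan1"
        set allowaccess ping
    next
end

# Restrict an admin account to a trusted management subnet
# and require a second authentication factor
config system admin
    edit "admin"
        set trusthost1 203.0.113.0 255.255.255.0
        set two-factor fortitoken
    next
end
```

None of this is exotic; it simply removes the three preconditions the campaign relied on — an exposed management interface, password-only authentication, and unrestricted source addresses.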
AI did not introduce a new vulnerability in this campaign. It made existing vulnerabilities easier to exploit, faster to execute, and harder to defend against.
And that is the real shift.