A recent cybersecurity investigation by Amazon Integrated Security has revealed a Russian-speaking hacker using generative AI to breach over 600 FortiGate firewalls in 55 countries within just five weeks. The campaign, which took place from January 11 to February 18, 2026, did not exploit zero-day vulnerabilities. Instead, the attacker targeted exposed management interfaces with weak credentials lacking multi-factor authentication (MFA) and leveraged AI tools to automate lateral movement and reconnaissance across compromised networks.
The compromised firewalls spanned regions including South Asia, Latin America, the Caribbean, West Africa, Northern Europe, and Southeast Asia. Amazon discovered the campaign after identifying a server hosting malicious tools used to target FortiGate devices. The attacker primarily used brute-force attacks on internet-exposed management ports—443, 8443, 10443, and 4443—to gain access opportunistically.
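Defenders can check for this kind of exposure from the outside. The sketch below (a minimal illustration, not a hardened scanner) probes the management ports named above on a host you administer; a successful TCP connect only proves reachability, not that login is possible:

```python
import socket

# Ports the campaign reportedly brute-forced on exposed
# FortiGate management interfaces.
MANAGEMENT_PORTS = [443, 8443, 10443, 4443]

def exposed_ports(host: str, ports=MANAGEMENT_PORTS, timeout=2.0):
    """Return the subset of `ports` accepting TCP connections on `host`.

    Intended for auditing your own perimeter only.
    """
    open_ports = []
    for port in ports:
        try:
            # create_connection raises OSError on refusal or timeout.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

# Example (hypothetical address of a firewall you manage):
# print(exposed_ports("203.0.113.1"))
```

Any management port reachable from an arbitrary internet host is a candidate for the opportunistic brute-forcing described here.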
Once a firewall was breached, the actor extracted configuration files containing:
- SSL-VPN credentials with recoverable passwords
- Administrative credentials
- Firewall policies and internal network architecture
- IPsec VPN configurations
- Network topology and routing information
These files were processed using AI-assisted Python and Go tools, which, although functional for the attacker’s specific objectives, lacked robustness and failed in hardened environments. The AI-generated code featured redundant comments, simplistic architectures, naive JSON parsing, and incomplete documentation—hallmarks of unrefined AI-assisted development.
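The brittleness the researchers describe is easy to picture. The contrast below is a hypothetical sketch (the field names and JSON layout are invented for illustration, not taken from the actual tooling): the naive parser assumes every key exists and crashes on anything else, while the defensive version tolerates missing sections:

```python
import json

def naive_admin_names(raw: str):
    # Naive style attributed to the AI-generated tools: assume every
    # key exists and every value has the expected shape. Raises
    # KeyError/TypeError on any config that deviates.
    data = json.loads(raw)
    return [a["name"] for a in data["system"]["admin"]]

def robust_admin_names(raw: str):
    # Defensive style: tolerate malformed input, missing sections,
    # and unexpected value types, returning an empty result instead
    # of crashing.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    admins = data.get("system", {}).get("admin", [])
    if not isinstance(admins, list):
        return []
    return [a["name"] for a in admins
            if isinstance(a, dict) and "name" in a]
```

In a hardened environment where exports are partial or fields are renamed, the naive version simply stops working, which matches the failures Amazon observed.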
The threat actor employed custom reconnaissance tools to scan networks, classify network segments, identify SMB hosts and domain controllers, and run Nuclei for HTTP service discovery. Operational notes in Russian detailed the use of Meterpreter and Mimikatz for DCSync attacks to extract NTLM hashes from Active Directory. Backup servers running Veeam were also targeted with PowerShell scripts and credential-extraction tools, potentially to disable recovery options before a ransomware attack.
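DCSync activity of the kind described here leaves a recognizable trace: a Windows Security event 4662 whose requested properties include the directory-replication extended-rights GUIDs, issued by an account that is not a domain controller. A minimal detection sketch over pre-parsed event records (the dict shape here is an assumption, not a real SIEM schema):

```python
# Well-known extended-rights GUIDs requested during AD replication.
# A 4662 event carrying them from a non-DC account is a common
# DCSync indicator.
REPL_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}

def flag_dcsync(events, dc_accounts):
    """Yield 4662 events whose Properties field includes a replication
    GUID and whose subject is not a known domain-controller account.

    `events` is an iterable of dicts with keys "EventID",
    "SubjectUserName", and "Properties" (hypothetical parsed shape).
    """
    for ev in events:
        if ev.get("EventID") != 4662:
            continue
        props = ev.get("Properties", "").lower()
        if (any(g in props for g in REPL_GUIDS)
                and ev.get("SubjectUserName") not in dc_accounts):
            yield ev
```

Real deployments would also whitelist Azure AD Connect and similar legitimate replication accounts, but the core signal is the same.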
Amazon researchers observed that the attacker used at least two large language model (LLM) services to:
- Generate step-by-step attack methodologies
- Develop scripts in multiple programming languages
- Create reconnaissance frameworks
- Plan lateral movement strategies
- Draft operational documentation
In one case, the attacker submitted a complete internal network topology—including IPs, hostnames, credentials, and services—to an AI service for guidance on further spreading within the network. This campaign highlights how AI services are lowering the barrier to entry for low-to-medium skill threat actors, allowing them to conduct complex intrusions at scale.
Further technical analysis by security researchers at Cyber and Ramen revealed a misconfigured server exposing 1,402 files, including stolen FortiGate configuration backups, Active Directory data, credential dumps, vulnerability assessments, and attack planning documents. The server also hosted a custom Model Context Protocol (MCP) server named ARXON, which interfaced reconnaissance data with LLMs like DeepSeek and Claude to produce structured attack plans. These plans included guidance for gaining Domain Admin access, locating credentials, exploiting vulnerabilities, and lateral movement. Some tools even executed offensive operations autonomously using Impacket, Metasploit, and hashcat.
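For readers unfamiliar with MCP: it is a JSON-RPC 2.0 protocol through which an LLM client discovers and invokes tools exposed by a server, which is what let ARXON feed reconnaissance output to the models. A simplified sketch of the wire format of a `tools/call` request (the tool name and arguments here are invented for illustration; real clients also perform an `initialize` handshake and `tools/list` discovery first):

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message type an
    MCP client sends to invoke a server-side tool on the model's behalf.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Benign illustration of the pattern (hypothetical tool name):
msg = mcp_tool_call(1, "summarize_scan", {"report_path": "scan.json"})
```

The same mechanism that lets a model query a database or file store legitimately is what ARXON repurposed to hand recon data to DeepSeek and Claude.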
A Docker-based tool named CHECKER2 orchestrated scans of thousands of VPN targets across more than 100 countries. The campaign evolved over weeks, initially using an open-source HexStrike MCP framework before migrating to the customized ARXON system.
The findings underscore the multiplying effect of generative AI in cyberattacks, enabling attackers to scale intrusions, automate attack planning, and generate AI-assisted operational documentation. Amazon recommends that FortiGate administrators:
- Avoid exposing management interfaces to the internet
- Enable MFA for all accounts
- Ensure VPN passwords differ from Active Directory credentials
- Harden backup infrastructure
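The first two recommendations translate directly into FortiOS configuration. The fragment below is a rough sketch of the direction, not a complete or version-specific recipe (CLI syntax varies across FortiOS releases; verify against Fortinet's documentation before applying):

```
config system interface
    edit "wan1"
        set allowaccess ping
    next
end
config system admin
    edit "admin"
        set trusthost1 198.51.100.0 255.255.255.0
        set two-factor fortitoken
    next
end
```

The first block removes HTTPS/SSH management access from the WAN-facing interface; the second restricts the admin account to a trusted management subnet (a placeholder range here) and requires FortiToken-based MFA at login.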
The campaign mirrors broader trends reported by Google, showing the increasing abuse of AI services across all stages of cyberattacks.
This campaign demonstrates how generative AI is being weaponized to amplify the capabilities of low-to-medium skill threat actors. By automating reconnaissance, lateral movement, and attack planning, AI allows hackers to scale operations across hundreds of devices worldwide. Organizations must prioritize MFA, limit internet-exposed management interfaces, and harden backup systems to reduce risk from such AI-assisted intrusions.