A Russian-speaking threat actor is using commercial generative AI services to compromise hundreds of Fortinet FortiGate firewalls, warns Amazon Threat Intelligence.

Once on a victim's network, the hackers compromised Active Directory at hundreds of organizations, extracted complete credential databases, and targeted backup infrastructure, a potential precursor to ransomware deployment, the report adds.

The report, by CJ Moses, CISO of Amazon Integrated Security, is another signal that commercial AI services are lowering the technical barrier to entry for offensive cyber capabilities.

A single actor, or a very small group, generated its entire toolkit through AI-assisted development, Amazon says.

But the report is also a reminder to CSOs and IT leaders of all organizations of something they have known for decades: Failure to implement cybersecurity basics will inevitably lead to a breach of security controls. The compromised FortiGate firewalls in this campaign are being exploited not through product flaws, but through exposed management ports and weak credentials protected by only single-factor authentication. A primary technique was trying lists of commonly reused credentials against those devices, a form of brute-force attack. These were “fundamental security gaps” that allowed AI to help an unsophisticated actor exploit at scale, the Amazon report says.
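The defensive flip side of such credential lists is straightforward: the same lists can be used to reject weak passwords before they are ever set. A minimal sketch, assuming a tiny illustrative deny-list (the entries below are hypothetical examples, not the attacker's actual list):

```python
# Minimal sketch: reject passwords that appear on a deny-list of
# commonly reused credentials, the same kind of list the attackers
# iterated through. The entries here are illustrative only.
COMMON_PASSWORDS = {
    "admin", "password", "123456", "fortinet", "p@ssw0rd",
}

def is_weak(password: str) -> bool:
    """Return True if the password matches the common-credential list (case-insensitive)."""
    return password.lower() in COMMON_PASSWORDS

print(is_weak("P@SSW0RD"))                     # True: case-insensitive match
print(is_weak("correct horse battery staple")) # False
```

In production this check belongs in the password-change workflow, backed by a far larger breached-credential corpus, and paired with MFA so a single guessed password is never sufficient.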

“When this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting,” says the report.

“Strong defensive fundamentals remain the most effective countermeasure” against similar attacks, Amazon stresses. This includes patch management for perimeter devices, credential hygiene, network segmentation, and robust detection of post-exploitation indicators.

Jeff Pollard, a principal analyst at Forrester Research who leads research into the role of the CSO, noted that, unlike many other recent attacks on Fortinet devices, this campaign stems from how the devices are configured, not from software vulnerabilities in the platform itself.

“It’s a case of needing to follow the basics and, if anything, makes those basics more important,” he said. “What’s more interesting than the attack itself is the evidence that attackers used AI platforms to scale the attack to make it as far reaching as they did.”

AI amplifies impact

“AI will do more than surface novel attacks,” he added. “It will also amplify the impact of all attacks, as this attack demonstrates. It lowers the barrier of entry to attackers and also ups the potential consequences of attacks at the same time. That’s not a combination IT, developers, or security practitioners needed, but alas, here we are.”

The Amazon report comes on the heels of one from Palo Alto Networks that looked at 750 incidents and came to the same conclusion: What is really killing organizations isn’t so much AI as their basic security failings, such as weak authentication, a lack of real-time visibility, and misconfigurations caused by a complex sprawl of security systems.

Amazon Threat Intelligence found that the Russian-speaking threat actor had been able to compromise over 600 FortiGate devices across more than 55 countries between January 11 and February 18, all without exploiting any vulnerabilities. Instead, it used unnamed commercial AI services, excluding AWS, to hack into weakly protected FortiGate devices; AI simply helped scale the attack.

“The threat actor in this campaign is not known to be associated with any advanced persistent threat group with state-sponsored resources,” the report says. “They are likely a financially motivated individual or small group who, through AI augmentation, achieved an operational scale that would have previously required a significantly larger and more skilled team.”

The gang also isn’t (or perhaps until now, wasn’t) careful: It left operational files, including AI-generated attack plans, victim configurations, and source code for custom tooling, on the publicly accessible IT infrastructure that was hosting its attacks.

“It’s like an AI-powered assembly line for cybercrime, helping less skilled workers produce at scale,” Amazon researchers said.

After stealing admin credentials, firewall policies, network topology, and routing information, as well as IPsec VPN peer configurations, the threat actor used AI-assisted Python scripts to parse, decrypt, and organize these stolen configurations.

After gaining VPN access to victim networks, Amazon says, the threat actor deployed a custom network reconnaissance tool, with versions written in both Go and Python. Analysis of the source code reveals clear indicators of AI-assisted development: redundant comments that merely restate function names, simplistic architecture with disproportionate investment in formatting over functionality, naive JSON parsing via string matching rather than proper deserialization, and compatibility shims for language built-ins with empty documentation stubs. While functional for the threat actor’s specific use case, the tooling lacks robustness and fails under edge cases, characteristics that Amazon says are typical of AI-generated code used without significant refinement.
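The “naive JSON parsing via string matching” indicator is worth illustrating. The sketch below contrasts the fragile string-search style with proper deserialization; the field names are hypothetical, not taken from the actor's actual tooling:

```python
import json

raw = '{"host": "10.0.0.1", "port": 443}'

# Fragile, AI-boilerplate style: pull a value out by searching for a
# literal substring. Breaks on reordered keys, different whitespace,
# escaped quotes, and other perfectly valid JSON variants.
def get_host_naive(text: str) -> str:
    start = text.find('"host": "') + len('"host": "')
    return text[start:text.find('"', start)]

# Robust: deserialize the document, then index the resulting dict.
def get_host_proper(text: str) -> str:
    return json.loads(text)["host"]

print(get_host_naive(raw))   # 10.0.0.1
print(get_host_proper(raw))  # 10.0.0.1

# The naive version silently fails on equivalent JSON with no space
# after the colon, returning an empty string instead of the host.
print(repr(get_host_naive('{"host":"10.0.0.1"}')))  # ''
print(get_host_proper('{"host":"10.0.0.1"}'))       # 10.0.0.1
```

Exactly this kind of silent edge-case failure is what Amazon flags as a fingerprint of unrefined AI-generated code.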

Recommendations

The Amazon report makes a number of recommendations to network admins with FortiGate devices. They include ensuring device management interfaces aren’t exposed to the internet or, if they must be, restricting access to known IP ranges and using a bastion host or out-of-band management network. As basic cybersecurity hygiene demands, all default and common credentials on FortiGate appliances should be changed. Admins should also ensure multifactor authentication is enforced for all admin and VPN access, and make sure there is no password reuse between FortiGate VPN credentials and Active Directory domain accounts.
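The "restrict access to known IP ranges" recommendation reduces to a simple allowlist check, which on FortiGate devices is normally expressed as trusted-host configuration rather than code. As a hedged illustration of the underlying logic (the network ranges below are documentation-reserved examples, not real management networks):

```python
import ipaddress

# Hypothetical allowlist of management networks. In practice this
# lives in the firewall's trusted-host settings, not in a script;
# both ranges below are RFC 5737 documentation addresses.
ALLOWED_MGMT_NETS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example: corporate VPN egress
    ipaddress.ip_network("198.51.100.8/29"),  # example: out-of-band bastion
]

def mgmt_access_allowed(source_ip: str) -> bool:
    """Permit management-plane access only from known source ranges."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_MGMT_NETS)

print(mgmt_access_allowed("203.0.113.40"))  # True: inside the allowlist
print(mgmt_access_allowed("8.8.8.8"))       # False: internet at large
```

The point of the control is that the brute-force traffic described in the report never reaches the login prompt at all, because the source address fails this check first.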

To avoid having their systems exploited, IT admins at firms using AWS are advised to enable Amazon GuardDuty for threat detection, monitor for unusual API calls and credential usage patterns, use Amazon Inspector to automatically scan for software vulnerabilities and unintended network exposure, and use AWS Security Hub to maintain continuous visibility into their security posture.

Fernando Montenegro, cybersecurity practice lead at Futurum, said organizations are still coming to terms with the acceleration and augmentation that AI can bring to adversaries. In this case, he said, the threat researchers highlighted how adversaries likely leveraged AI capabilities to create crude but effective tools to support their campaign. This is the same kind of capability that allows a non-malicious user to ‘vibe code’ something for a narrow use case, but instead of a benign app, it’s a malicious tool.

Raises the bar for security

Organizations always deal with constraints that are not visible to outside observers, so ‘implementing security basics’ may, in many cases, not be a simple endeavor, he added. Most security teams deal with numerous competing priorities and limited budgets, and must constantly balance a mixture of new-initiative and steady-state operational activities. 

“What this incident, and others, are making abundantly clear is that the augmentation of attackers through AI is constantly and quickly raising the bar in what is considered acceptable security practices moving forward,” he also said. “This will require organizations to spend more cycles making sure that these weaker security practices be quickly removed from their environment, lest they fall prey to nimble(r) attackers.”

In a LinkedIn blog, Amazon CISO Moses noted that organizations with strong credential hygiene, MFA, and proper network segmentation successfully blocked these attacks. “And while AI is lowering the barrier to entry for attackers,” he added, “it’s an equally powerful tool for defenders, helping security teams detect threats faster, automate response at scale, and stay ahead of evolving tactics. As attack volumes grow from both skilled and unskilled adversaries, the same defensive basics that protected against this campaign will remain your most effective countermeasure.”

In response to questions from CSO, he added that the Russian group’s success “fundamentally demonstrates that threat actors often choose the path of least resistance. When basic security controls like multi-factor authentication, proper network segmentation, and credential management aren’t in place, even unsophisticated actors can achieve strategic objectives at scale. The AI simply amplified their efficiency.”

Asked why IT leaders are still unable to implement cybersecurity basics, he said, “The challenge isn’t knowledge, it’s operating in resource-constrained environments where technical debt and competing business priorities create systematic gaps in foundational security. Legacy systems, budget constraints, and rapid digital transformation often force difficult trade-offs, but threat actors are now leveraging AI to exploit these exact vulnerabilities at machine speed. The path forward requires making security fundamentals so embedded that they become operationally resilient, even under resource pressure.”
