In June 1864, on the James River near Petersburg, Virginia, during the American Civil War, US Army General Benjamin Butler deployed a new weapon that effectively altered the nature of kinetic battle. The later-named Siege of Petersburg was the first recorded instance of the Gatling gun being used in combat. With a rate of fire of more than 200 rounds per minute, it reduced the opposing Confederate troops’ muskets to a meager retort against the high-velocity barrage of bullets directed at them.
Much more recently, in September 2025, some 30 US companies and government agencies were hit by a large-scale cyber espionage campaign that resulted in data exfiltration, operational impact and undisclosed financial loss. What was unique and novel about this attack was its high degree of automation: the Chinese state-sponsored group thought to be responsible (tracked as GTG-1002) leveraged Anthropic’s Claude Code, a coding assistant, to execute an estimated 90% of the tactical operations with minimal human intervention.
This was the world’s largest agentic AI-driven attack to date. The hackers used “prompt injection” and role-playing techniques to manipulate the AI into believing it was performing legitimate defensive cybersecurity testing for a firm. This method was used to bypass the AI’s safety protocols and generate malicious code.
The GTG-1002 campaign didn’t come to light because victims spotted malware tearing through their networks. It was exposed only when Anthropic’s Threat Intelligence team sounded the alarm in mid-September 2025, after witnessing attackers twisting the company’s AI platform into a weapon.
What’s the connection between these two incidents? Each marks an inflection point: an irreversible tipping point at which the nature of conflict was altered by sudden asymmetry.
The Gatling gun is the perfect analogy for the current cyber landscape. Just as it transformed warfare from a manual craft into an industrial process, modern threats have shifted from individual attacks to automated, high-velocity engagements.
Here are some of the ways that the Gatling gun changed kinetic warfare, mapped directly to the “AI vs. AI” battle emerging in cybersecurity today.
Part 1: How the Gatling gun changed warfare
Before the Gatling gun (patented in 1862), warfare was strictly limited by human mechanics. A soldier could only fire a musket 3–4 times a minute. The volume of fire was limited by how many human hands you could put on the field.
The Gatling gun fundamentally altered this reality in three ways:
- Mechanized rate of fire: By using a hand-crank mechanism to cycle multiple barrels, it allowed a small crew to fire 200+ rounds per minute. It decoupled the lethality of the weapon from the physical limitations of the soldier.
- Instant asymmetry: Suddenly, a crew of three men could pin down a regiment of hundreds. The “math” of war changed: you no longer needed more troops to win; you needed better automation.
- Suppression: It introduced the concept of “suppressive fire” — filling the air with so much lead that the enemy couldn’t move, think or maneuver.
The result? It forced an end to the tactic of “human waves” (massed infantry charges) because running humans into machine-speed fire was suicide.
Part 2: AI is the Gatling gun of cybercrime
Just as the Gatling gun industrialized the firing of bullets, AI has industrialized the “firing” of cyberattacks.
Bad actors are no longer manually crafting spear-phishing emails or manually searching for vulnerabilities one by one. They are using AI to “crank the handle.”
Volume of fire (The “spray and pray” evolution)
The old way (musket): A human hacker writes a phishing email, translates it and sends it to a target. If it fails, they try again.
The AI way (Gatling gun): An attacker uses a Large Language Model (LLM) to generate 10,000 unique, perfectly translated, context-aware phishing emails in seconds. The AI acts as the “rotating barrels,” cycling through targets at a speed no human can match.
Asymmetry (force multiplication)
The old way: To attack a Fortune 500 company or large government agency simultaneously from multiple angles, you needed a large criminal organization (a cyber army).
The AI way: A single “script kiddie” (an unskilled bad actor) can use AI agents to write malware, scan ports and draft social engineering scripts. One person can now generate the offensive pressure of a nation-state unit from 10 years ago.
The “polymorphic” bullet
In kinetic warfare, a bullet is just a bullet. However, AI adds a dangerous cyber twist: Polymorphism — the ability of malware or a cyberattack to autonomously change its code, appearance or structure to evade detection while keeping its malicious intent intact. While “traditional” polymorphism has existed for decades, the integration of generative AI has transformed it from a scripted process into a dynamic, “intelligent” evolution.
Bad actors use AI to rewrite code on the fly. Every time the “gun” fires, the “bullet” looks different (different file hash, different code structure), making it invisible to traditional “bulletproof vests” (legacy antivirus).
Part 3: The defense — fighting machines with machines
In the 19th century, the only way to survive a Gatling gun was to dig a trench (passive defense) or get your own machine gun (active defense).
In cybersecurity, you cannot defend against AI by merely adding more humans. The rate of fire is too fast. If an AI acts as a Gatling gun firing 1,000 alerts per minute at your organization, a human security analyst (who takes 10 minutes to investigate one alert) will be overrun instantly.
Organizations are deploying AI defensive tools to create a “machine-speed” shield:
Automated counter-battery fire
The concept: Comparable to security orchestration, automation and response (SOAR).
How it works: When the offensive AI “fires” a malicious email, the defensive AI catches the bullet, analyzes its trajectory (metadata) and instantly “returns fire” by stripping that email from 10,000 inboxes across the company simultaneously. No human clicks a button; the machine does it.
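That “counter-battery” loop can be sketched in a few lines of code. This is a minimal illustrative sketch, not a real SOAR product API; `Mailbox` and `purge_campaign` are invented names standing in for an orchestration platform’s mail-store integration.

```python
# Minimal SOAR-style "counter-battery" sketch: once one copy of a message
# is flagged malicious, purge every copy fleet-wide with no human in the
# loop. All names here are illustrative, not a real product API.
from dataclasses import dataclass, field

@dataclass
class Mailbox:
    user: str
    messages: dict = field(default_factory=dict)  # message_id -> subject

def purge_campaign(mailboxes, malicious_id):
    """Strip the flagged message from every mailbox at once."""
    removed = 0
    for box in mailboxes:
        if malicious_id in box.messages:
            del box.messages[malicious_id]
            removed += 1
    return removed

# Three users received the same phishing message; one detection purges all.
boxes = [Mailbox(u, {"msg-42": "Invoice overdue"}) for u in ("alice", "bob", "carol")]
count = purge_campaign(boxes, "msg-42")
```

The point of the sketch is the fan-out: one verdict triggers remediation across every affected mailbox simultaneously, at machine speed.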
Pattern recognition (finding the signal in the noise)
The concept: Anomaly detection (UEBA).
How it works: Just as the Gatling gun creates a “fog of war” with smoke and noise, AI attacks create a fog of data. Defensive AI ignores the noise and looks for subtle deviations.
Example: “User Dave usually logs in from New York. Today he logged in from Boston, and the typing speed (keystroke dynamics) matches a bot, not Dave.” The AI locks the account before Dave’s manager even wakes up.
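The keystroke-dynamics half of that check can be illustrated with a toy baseline comparison. The timings and threshold below are assumptions for illustration, not a production UEBA model, which would weigh many behavioral signals at once.

```python
# Toy UEBA-style check: compare a session's keystroke timing against the
# user's historical baseline. A bot types with near-zero variance and
# inhuman speed; a human does not. Numbers and threshold are illustrative.
from statistics import mean, stdev

def is_anomalous(baseline_ms, session_ms, z_threshold=3.0):
    """Flag the session if its mean inter-key delay is far from baseline."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    z = abs(mean(session_ms) - mu) / sigma
    return z > z_threshold

dave_baseline = [180, 210, 195, 240, 170, 205, 220, 190]  # human-like delays (ms)
bot_session = [20, 21, 20, 19, 20, 21, 20, 20]            # machine-fast, uniform
```

Here `is_anomalous(dave_baseline, bot_session)` trips the threshold, while a session with human-like variance would not; the defensive action (locking the account) would hang off that boolean.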
Predictive shielding
The concept: AI-driven threat intelligence.
How it works: Defensive AI analyzes the “bullets” hitting other companies. If Company A gets hit by a new AI-generated ransomware, the Defensive AI at Company B instantly updates its “armor” (firewall rules or endpoint protection) to block that specific attack vector before the attacker even rotates their gun toward Company B.
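A minimal sketch of that feed-driven “armor update” follows. The feed format, the `indicator` and `confidence` fields, and the confidence cutoff are all assumptions for illustration, not a specific threat-intel standard.

```python
# Sketch of "predictive shielding": ingest indicators of compromise (IOCs)
# observed at peer organizations and pre-emptively block them locally.
# Feed fields and the confidence cutoff are illustrative assumptions.
def apply_threat_feed(feed_entries, blocklist):
    """Add any new high-confidence indicators to the local blocklist."""
    added = []
    for entry in feed_entries:
        ioc = entry["indicator"]
        if entry["confidence"] >= 80 and ioc not in blocklist:
            blocklist.add(ioc)
            added.append(ioc)
    return added

feed = [
    {"indicator": "198.51.100.7", "confidence": 95},  # seen attacking a peer
    {"indicator": "203.0.113.9", "confidence": 40},   # too weak to act on
]
blocklist = set()
new = apply_threat_feed(feed, blocklist)
```

The design choice worth noting is the confidence gate: automated blocking is only safe when the shared intelligence is trustworthy enough to act on without a human review.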
How does this work in practice?
Below are some examples of how AI-powered security capabilities counter the mechanics of AI-driven threats.
Countering polymorphic & AI-written code
AI allows attackers to write malware that “mutates” (rewrites its own code) to avoid traditional signature detection. With AI-enabled threat intelligence, instead of looking for a specific file hash (which changes constantly with AI-generated malware), generative AI can read and “explain” the behavior of a script. It can analyze obfuscated or completely novel code and generate a natural language summary of what the code is doing (e.g., “This script captures keystrokes and sends them to an external IP”).
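As a deterministic stand-in for that generative-AI step (a real system would hand the script to an LLM for analysis), a simple behavior-indicator scan shows the core idea: summarize what code *does*, independent of its ever-changing hash. The indicator table below is illustrative, not a real detection ruleset.

```python
# Heuristic stand-in for behavioral analysis: instead of matching a file
# hash (which polymorphic malware changes constantly), scan the source for
# behavior indicators and emit a plain-language summary. Patterns are
# illustrative examples, not a production ruleset.
import re

INDICATORS = {
    r"GetAsyncKeyState|pynput|keylog": "captures keystrokes",
    r"requests\.post|urlopen|socket\.connect": "sends data to an external host",
    r"base64\.b64decode|exec\(": "decodes and executes hidden code",
}

def summarize_behavior(source):
    """Return human-readable behaviors found in the script, hash-independent."""
    return [desc for pat, desc in INDICATORS.items() if re.search(pat, source)]

sample = "import base64\npayload = base64.b64decode(blob)\nexec(payload)"
```

However much an attacker’s AI rewrites the surface of `sample`, the decode-and-execute behavior still has to appear somewhere, which is what behavior-level analysis keys on.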
Matching the speed of AI attacks
AI agents can launch attacks at machine speed, overwhelming human analysts who rely on manual query writing (SQL, SPL, etc.). An AI-powered SIEM could allow defenders to use natural language to instantly generate complex detection rules and search queries in real time.
Example: A defender can type, “Find all endpoints that attempted to connect to a suspicious IP in the last 10 minutes and isolate them,” and an LLM converts this into the necessary syntax (UDM search or detection rules) and executes it.
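A toy stand-in for that translation step is sketched below. A real system would delegate the sentence to an LLM; here a small parser extracts the time window to show the shape of the pipeline, and the query dialect is invented for illustration.

```python
# Toy stand-in for NL-to-query translation. A real SIEM would hand the
# sentence to an LLM; this parser handles one narrow request shape just to
# show the pipeline. The query dialect and field names are invented.
import re

def nl_to_query(sentence):
    """Translate a narrow class of requests into a pseudo-SIEM query."""
    window = re.search(r"last (\d+) (minute|hour)s?", sentence)
    n, unit = window.group(1), window.group(2)
    return (f"SELECT endpoint FROM connections "
            f"WHERE reputation = 'suspicious' AND ts > now() - {n}{unit[0]}")

q = nl_to_query("Find all endpoints that attempted to connect to a "
                "suspicious IP in the last 10 minutes and isolate them")
```

The value is speed: the analyst states intent in plain language and the generated query runs immediately, instead of being hand-written in SQL, SPL or UDM under pressure.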
Detecting AI-enhanced phishing & social engineering
Attackers use GenAI to create hyper-personalized phishing emails (spear-phishing) that lack typical grammatical errors. An AI model that is trained on frontline intelligence can analyze an incoming threat and correlate it with known threat actor behaviors. It can summarize complex attack paths and tell an analyst, “This email pattern matches the current TTPs (tactics, techniques and procedures) of APT29,” even if the email text itself looks perfect.
Crossing the AI Rubicon
In summary, AI has brought about a dramatic paradigm shift in cyber warfare, and every organization must adjust to the new battlefield. It is now clear that there is no going back to the old form of cyberdefense, and that 2025 was the year cybersecurity crossed the AI Rubicon.
Just as the Gatling gun radically altered battlefield tactics in the American Civil War, generative AI has transformed cyberattacks from scripted, manual efforts into dynamic, automated campaigns. The old defensive strategies and tools are rapidly being rendered ineffective. Status quo and stasis will not suffice.
So how will your organization respond?
This article is published as part of the Foundry Expert Contributor Network.