In a newly uncovered campaign, threat actors deployed a previously undocumented backdoor, dubbed SesameOp, which abuses the OpenAI Assistants API to relay commands and exfiltrate results.
According to researchers at Microsoft, the campaign was active for months before detection, and relied on obfuscated .NET libraries loaded via AppDomainManager injection into compromised Visual Studio utilities.
“Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a C2 channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment,” the researchers said in a blog post.
The technique does not abuse a vulnerability in the AI service itself; it misuses a legitimate API in a clever way, raising the bar for detection and defence teams.
Exploiting the Assistants API
The backdoor’s infection chain begins with a loader, “Netapi64.dll,” injected into a host executable via .NET AppDomainManager injection, a stealthy method used to evade conventional monitors. Once active, the implanted component accesses the OpenAI Assistants API using a hard-coded API key, treating the Assistants infrastructure as a storage and relay layer.
“In the context of OpenAI, Assistants refer to a feature within the OpenAI platform that allows developers and organizations to create custom AI agents tailored to specific tasks, workflows, or domains,” Microsoft researchers said. “These Assistants are built on top of OpenAI’s models (like GPT-4 or GPT-4.1) and can be extended with additional capabilities.”
The malware fetches command payloads embedded in “Assistants” descriptions (which can be set to values like “SLEEP”, “Payload”, “Result”), then decrypts, decompresses, and executes them locally. After execution, the results are uploaded back via the same API, much like the “living off the land” attack model, but in an AI cloud context.
Because the attacker uses a legitimate cloud service for command-and-control, detection becomes harder, researchers noted. There’s no C2 domain, only benign-looking traffic to api.openai.com.
Lessons for defenders and platform providers
Microsoft clarified that OpenAI’s platform itself wasn’t breached or exploited; rather, its legitimate API functions were misused as a relay channel, highlighting a growing risk as generative AI becomes part of enterprise and development workflows. Attackers can now co-opt public AI endpoints to mask malicious intent, making detection significantly harder.
In response, Microsoft and OpenAI acted to disable the attacker-linked accounts and keys. The companies also urged defenders to inspect logs for outbound requests to unexpected domains such as api.openai.com, particularly from developer machines. Enabling tamper protection, real-time monitoring, and block mode in Defender, Microsoft said, can help detect lateral movements and the injection patterns used by SesameOp.
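The log review recommended above, sketched as an added example that is not from Microsoft's report or the original article: it groups requests to api.openai.com by device and process and reports those coming from processes outside an expected-client list. The CSV layout, the device and process names, and the allowlist are assumptions made for the example.

```python
import csv
import io
from collections import defaultdict

WATCHED_HOST = "api.openai.com"  # destination named in the article

# Assumption: processes this organisation expects to call the API.
EXPECTED_CLIENTS = {"chrome.exe", "msedge.exe"}

# Constructed sample log (devices, processes and times are invented for the example).
SAMPLE_LOG = """\
time,device,process,dest_host
2025-01-01T09:00:00,dev-ws-01,chrome.exe,api.openai.com
2025-01-01T09:00:05,dev-ws-01,vs-utility.exe,api.openai.com
2025-01-01T09:05:05,dev-ws-01,vs-utility.exe,API.OpenAI.com.
2025-01-01T09:06:00,dev-ws-02,msedge.exe,example.org
2025-01-01T09:10:05,dev-ws-01,VS-Utility.exe,api.openai.com
2025-01-01T09:11:00,build-01,svc-helper.exe,api.openai.com
"""


def normalise_host(host):
    """Lower-case and drop a trailing root dot so variants compare equal."""
    return host.strip().lower().rstrip(".")


def hunt(log_text, expected=EXPECTED_CLIENTS):
    """Return one finding per (device, process) that contacted the watched
    host from a process outside the expected-client list."""
    hits = defaultdict(list)
    for row in csv.DictReader(io.StringIO(log_text)):
        if normalise_host(row["dest_host"]) != WATCHED_HOST:
            continue
        process = row["process"].strip().lower()
        if process in expected:
            continue
        hits[(row["device"], process)].append(row["time"])
    findings = []
    for (device, process), times in sorted(hits.items()):
        times.sort()  # ISO 8601 timestamps sort correctly as strings
        findings.append({"device": device, "process": process,
                         "requests": len(times),
                         "first_seen": times[0], "last_seen": times[-1]})
    return findings


if __name__ == "__main__":
    for finding in hunt(SAMPLE_LOG):
        print(finding)
```

A hit is a lead for triage, not a verdict: the traffic looks the same whether it comes from a sanctioned tool missing from the allowlist or from an implant.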
“Microsoft Defender Antivirus detects this threat as ‘Trojan:MSIL/Sesameop.A (loader)’ and ‘Backdoor:MSIL/Sesameop.A (backdoor)’,” researchers added.
Attackers continue finding inventive ways to weaponize AI. Recent disclosures have shown autonomous AI agents deployed to automate entire attack chains, generative AI used to accelerate ransomware campaigns, and prompt-injection techniques to weaponize coding assistants.