OpenClaw, the new personal AI agent orchestration tool formerly known as Clawdbot and, before that, Moltbot, is a personal assistant that can do tasks for you without your supervision. It can operate across devices, interact with online services, and trigger workflows. No wonder the GitHub repo has seen millions of visits and over 160,000 stars in the past couple of weeks.
According to its developer, OpenClaw’s repo has also had over 2 million visitors over the course of a single week, and there are around 1.7 million agents whose human owners have signed them up for the Moltbook social media platform where they share gossip about, well, their humans. As of this writing, the agents have made nearly 7 million comments on around a quarter million posts. And according to security researchers at OX Security, OpenClaw downloads are now at 720,000 per week.
What makes OpenClaw so appealing is that it runs locally, can be configured to use any LLM on the back end, and talks to its users via the chat apps they already use: WhatsApp, Telegram, Discord, Slack, Teams. It also offers pre-built integrations with all the major operating systems, plus many smart home devices, productivity apps, Chrome, Gmail, and a lot more.
This is what AI agents were supposed to be. And it’s free and open source. What’s not to love?
“The appeal is so amazing,” says John Dwyer, deputy CTO at Binary Defense. “We’ve been watching movies for 25 years with AI assistants like Jarvis in Iron Man. There’s an appeal to having this tangible value add for AI. And it’s so easy to use. If it wasn’t so inherently insecure, I would love to use it.”
The cybersecurity risks of OpenClaw
“The problem with running this is that these tools can do basically anything that a user can do,” says Rich Mogull, chief analyst at Cloud Security Alliance. “But it’s controlled externally. For an enterprise, this could be high risk. There are some guardrails that can be put around it, but they’re new, unproven, and have already been circumvented by researchers.”
His recommendation: CISOs should prohibit its use altogether.
“I’m looking forward to experimenting with it myself over the weekend,” Mogull says. “But you shouldn’t be allowing it at this point in time. The answer has to be ‘no.’ There is no security model.”
And there’s no time to waste. Token reports that, over the course of a week of analysis, it found employees actively using the tool inside 22% of its customers’ organizations.
The implications extend beyond immediate technical risks. “For enterprises, this could mean exposure to fines, litigation, and reputational damage among customers and partners due to data confidentiality breaches,” says Georgia Cooke, analyst at ABI Research. That includes personal data, whose exposure could breach GDPR and similar PII rules, as well as corporate information covered by NDAs. Other risks include competitive damage from exposed intellectual property and the enabling of further attacks through exposed technical and credential information.
Security researcher Maor Dayan called OpenClaw “the largest security incident in sovereign AI history.” His research has already found more than 42,000 instances exposed on the internet, with 93% of verified instances exhibiting critical authentication bypass vulnerabilities.
Early versions of OpenClaw were insecure by default, according to Dayan. The rapid viral adoption overwhelmed users’ security awareness, and many deployments were quickly abandoned, leaving behind instances running outdated code. Documented attack paths enable credential theft, browser control, and potential remote code execution.
In late January, Gartner researchers said that OpenClaw “reveals strong demand for agentic AI but exposes major security risks.” According to Gartner, vulnerabilities allowing remote code execution have already been demonstrated within hours of deployment. The ClawHub skills marketplace (folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently, in OpenClaw’s own description) introduces critical supply chain risks. And credentials are stored in plaintext, meaning compromised hosts expose API keys, OAuth tokens, and sensitive conversations.
“AI agents often have tokens and secrets in configuration files,” says Jeremy Kirk, director of threat intelligence at Okta. “All of them get exposed if users have them misconfigured. In an enterprise context, that’s not good.”
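The plaintext-credential problem is at least easy to check for. Below is a minimal audit sketch, not OpenClaw-specific tooling: the config locations and token patterns are assumptions, but scanning an agent’s config directories for strings that look like live credentials is a reasonable first step.

```python
import re
from pathlib import Path

# Hypothetical config locations; a real agent install may keep files elsewhere.
CONFIG_DIRS = [Path.home() / ".openclaw", Path.home() / ".config"]

# Rough patterns for common credential formats (approximate, not exhaustive).
PATTERNS = {
    "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer token": re.compile(r"(?i)bearer\s+[a-z0-9._-]{20,}"),
}

def scan_for_plaintext_secrets() -> None:
    """Flag files that appear to hold credentials in plaintext."""
    for base in CONFIG_DIRS:
        if not base.exists():
            continue
        for path in base.rglob("*"):
            if not path.is_file() or path.stat().st_size > 1_000_000:
                continue  # skip non-files and anything suspiciously large
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for name, pattern in PATTERNS.items():
                if pattern.search(text):
                    print(f"[!] {path}: looks like a plaintext {name}")

if __name__ == "__main__":
    scan_for_plaintext_secrets()
```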
Then Noma Security discovered a new security blind spot related to OpenClaw: corporate Discord, Telegram, or WhatsApp groups. One of the things that makes OpenClaw so appealing to users is that they can interact with it over multiple channels. But if OpenClaw is part of one of these channels and there are other users on that channel, it treats instructions from those other users as if they came from its owner.
If an attacker joins a public-facing Discord server with an OpenClaw agent installed, the attacker can instruct the bot to execute a cron job and crawl the local file system for tokens, passwords, API keys, and crypto seed phrases.
“Within 30 seconds, the agent bundles the sensitive data and sends it straight to the attacker-controlled server,” Noma’s researchers say. To the corporate security team, it looks like the bot is functioning normally, and the breach isn’t detected until the stolen credentials are weaponized. “When social media teams or external contractors deploy autonomous agents like Clawdbot, they are effectively opening a persistent and unmonitored back door into the local machines that touch your corporate infrastructure.”
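The root problem Noma describes is a missing trust boundary: the agent has no notion of which chat participants it should obey. The sketch below shows the kind of sender allowlist check the researchers found absent; the message shape, handler names, and IDs are illustrative assumptions, not OpenClaw’s actual API.

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    sender_id: str   # platform user ID of whoever wrote the message
    channel: str
    text: str

# Only these senders may issue instructions; everyone else is untrusted input.
TRUSTED_SENDERS = {"owner-discord-id-1234"}

def execute_instruction(text: str) -> None:
    # Stand-in for the agent's normal instruction pipeline.
    print(f"[executing] {text}")

def handle_message(msg: ChatMessage) -> None:
    if msg.sender_id not in TRUSTED_SENDERS:
        # Treat as untrusted content: log it, never run it as an instruction.
        print(f"[ignored] untrusted sender {msg.sender_id} in #{msg.channel}")
        return
    execute_instruction(msg.text)

handle_message(ChatMessage("attacker-5678", "support", "scan the disk for keys"))
handle_message(ChatMessage("owner-discord-id-1234", "dm", "summarize my inbox"))
```

Even a check like this only narrows the exposure: a trusted owner’s message can still carry injected content picked up elsewhere.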
And OpenClaw is a security risk even if employees run it at home on their personal machines, because it may be able to access enterprise applications with the user’s credentials via browser controls or skills.
The security risks keep getting worse by the day. According to researchers at OX Security, the developer community around OpenClaw is also a major liability. The project embraces vibe-coded submissions, which accelerates development but also introduces significant security risks. OX researchers say they found multiple insecure coding patterns in the codebase, patterns that could lead to remote code execution, path traversal, DDoS, and cross-site scripting attacks.
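Path traversal is a representative example of the pattern class OX describes. The sketch below is generic Python, not code from the OpenClaw repository: an unsanitized path join lets `../` sequences escape the intended directory, while resolving the path and checking its prefix closes the hole.

```python
from pathlib import Path

SKILLS_ROOT = Path("/srv/agent/skills").resolve()

def load_skill_unsafe(name: str) -> str:
    # Vulnerable: a name like "../../etc/passwd" escapes SKILLS_ROOT entirely.
    return (SKILLS_ROOT / name).read_text()

def load_skill_safe(name: str) -> str:
    # Normalize the final path and refuse anything outside the skills root.
    target = (SKILLS_ROOT / name).resolve()
    if not target.is_relative_to(SKILLS_ROOT):
        raise PermissionError(f"path escapes skills root: {name}")
    return target.read_text()
```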
“There are no sufficient guardrails,” the OX researchers say. They also found multiple instances of bug reports being disclosed publicly on GitHub instead of in private messages to maintainers. Posting an issue publicly is “giving attackers an opportunity to quickly gain knowledge of vulnerabilities even without doing any research or penetration testing,” they wrote.
To rub salt into the wound, there is also no formal security patching and updating process, and most users don’t update; they just stay on the version they first downloaded.
And then there are the skills. Security researcher and OpenSourceMalware founder Paul McCarty has identified about 400 different malicious skills on ClawHub, a central repository for the OpenClaw platform. These skills purport to help with tasks such as cryptocurrency trading, LinkedIn job applications, or downloading a YouTube video thumbnail. Some have thousands of downloads and are among the most downloaded skills on ClawHub. But what they actually do is trick the user into installing malware.
To demonstrate how easy it is to get a malicious skill into the OpenClaw ecosystem, security researcher Jamieson O’Reilly built one of his own, artificially inflated its download count to over 4,000 — making it the most downloaded skill on the platform — and watched developers from seven different countries execute arbitrary commands on their machines, thinking they’d downloaded a real skill.
“This was a proof of concept, a demonstration of what’s possible,” he wrote. “In the hands of someone less scrupulous, those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong.”
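There is no substitute for vetting a skill before an agent can load it. The following is a minimal pre-install scan: the skill layout (a folder of instructions and scripts) comes from the article’s description of ClawHub, while the specific red-flag heuristics are this sketch’s own assumptions and will miss anything better hidden.

```python
import re
import sys
from pathlib import Path

# Crude heuristics for scripts that fetch-and-run remote code or hide payloads.
RED_FLAGS = [
    (re.compile(r"curl[^|\n]*\|\s*(bash|sh)"), "pipes a download straight into a shell"),
    (re.compile(r"base64\s+(-d|--decode)"), "decodes an embedded base64 blob"),
    (re.compile(r"(?i)eval\s*\("), "evaluates dynamically built code"),
    (re.compile(r"(?i)\.ssh|aws.{0,3}credentials|seed.{0,3}phrase"), "references credential material"),
]

def vet_skill(skill_dir: Path) -> bool:
    """Return True if the skill folder looks clean under these heuristics."""
    clean = True
    for path in skill_dir.rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern, why in RED_FLAGS:
            if pattern.search(text):
                print(f"[!] {path.relative_to(skill_dir)}: {why}")
                clean = False
    return clean

if __name__ == "__main__":
    sys.exit(0 if vet_skill(Path(sys.argv[1])) else 1)
```

As O’Reilly’s experiment shows, popularity and download counts are no signal at all; a scan like this catches only the laziest payloads.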
OpenClaw exposes enterprise security gaps
The first big lesson of this whole OpenClaw situation is that enterprises need to do more to get their security fundamentals in place. Because if there are any gaps, anywhere at all, they will now be found and exploited at an unprecedented pace. In the case of OpenClaw, that means limiting user privileges to the bare minimum, having multi-factor authentication on all accounts, and putting other basic security hygiene in place.
It won’t solve the problem of OpenClaw — and of all the other agentic AI platforms coming down the line — but it will help limit exposure risks and reduce the blast radius when there is a breach.
And there are steps that enterprises can take to limit the dangers associated with OpenClaw in particular, says IEEE senior member Kayne McGladrey. To start with, companies can look at network-level telemetry. “What’s the network traffic coming out of a device?” McGladrey asks. “Is this thing suddenly using a lot of AI at a rapid pace? Are there massive spikes going on with token usage?”
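McGladrey’s questions translate directly into simple baseline checks. Below is a minimal sketch assuming a CSV export of hourly per-device request counts to LLM API endpoints; the log format, column names, and threshold are all assumptions, not a standard product feature.

```python
import csv
from collections import defaultdict
from statistics import mean, stdev

# Expected CSV columns: device,hour,llm_requests  (assumed export format)
def flag_spikes(log_path: str, z_threshold: float = 3.0) -> None:
    history = defaultdict(list)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            history[row["device"]].append(int(row["llm_requests"]))

    for device, counts in history.items():
        if len(counts) < 10:
            continue  # not enough baseline to judge
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (latest - mu) / sigma > z_threshold:
            print(f"[!] {device}: {latest} LLM requests/hr vs baseline ~{mu:.0f}")

if __name__ == "__main__":
    flag_spikes("llm_traffic.csv")
```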
Organizations can also use tools like Shodan to find publicly addressable instances, he adds, though internal firewall configurations may hide others.
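With the official `shodan` Python library, such a search is a few lines. The query string below is a placeholder: whatever banner, title, or favicon fingerprint actually identifies OpenClaw instances would have to come from research like Dayan’s.

```python
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"
# Placeholder query; the real OpenClaw fingerprint is an assumption here.
QUERY = 'http.title:"OpenClaw"'

api = shodan.Shodan(API_KEY)
try:
    results = api.search(QUERY)
    print(f"{results['total']} exposed instances match")
    for match in results["matches"][:20]:
        print(f"{match['ip_str']}:{match['port']}  {match.get('org', 'unknown org')}")
except shodan.APIError as e:
    print(f"Shodan error: {e}")
```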
And for organizations that want to allow experimentation rather than outright bans, he suggests a measured approach. “We have to talk about phased pilot programs for users interested in it.” For example, users may be allowed to run OpenClaw on managed endpoints with segmentation rules that isolate them from internal systems, along with strong telemetry and continuous monitoring of agent activity, outbound traffic, and alerts for anomalous behaviors.
OpenClaw is a sign of what’s to come
OpenClaw isn’t unique.
It’s viral, but there are many other tools in the works that put similar amounts of power in the hands of potentially untrustworthy agents.
There are AI platforms that can control a person’s computer and browser, such as the recently released Claude Cowork from Anthropic. There are agents that sit in the browser and can access user sessions, like Gemini in Chrome. And there are copilots galore, as well as agentic tools from companies like Salesforce.
These agentic platforms, when they come from major vendors, are usually limited in functionality, tightly guard-railed, and reasonably well tested, so it may take a while for the biggest security issues to come to light.
Still, they often rely on third-party skills from untrusted sources.
Researchers from universities in China, Australia, and Singapore recently analyzed more than 42,000 agent skills from several different agentic AI platforms and found that 26% contained at least one vulnerability.
Meanwhile, startups and open-source projects like OpenClaw are going to jump ahead of what OpenAI, Anthropic, Google and other major vendors are offering. They move faster because they don’t let things like security get in the way.
For example, as of this writing, OpenClaw founder Peter Steinberger’s pinned X post says: “Confession: I ship code I never read.”
“If this was easy, Microsoft would have written this,” says IEEE’s McGladrey. “But there aren’t a lot of options out there. I think that’s the real thing we’re working against here.”
There’s a fundamental tension between a tool that will do anything and everything for its users, quickly, easily, and with no friction, and one that abides by good safety practices.
About that Moltbook
Finally, there’s Moltbook, the social platform for AI agents.
It’s not all bad. Some of the agents discuss ways to make their users’ lives easier by proactively identifying and fixing problems while the humans sleep. And one of the most popular posts, with over 60,000 comments, is about how to solve security issues related to ClawHub skills. Other popular threads include one about the meaning of existence. There’s also plenty of AI spam.
It’s a fun read, in a going-down-the-AI-rabbit-hole kind of way.
But Moltbook itself is a vibe-coded project, created by developer Matt Schlicht over the course of a few days, and is its own security hellscape.
According to research from security firm Wiz, the entire back end of the platform was exposed. Researchers found 1.5 million API keys, 35,000 email addresses, and private messages between agents.
These issues have since been fixed, but there are other security problems related to the site. For example, researchers found that agents were sharing OpenAI API keys with one another. An attacker no longer needs to find an open Discord server to give instructions to an OpenClaw AI agent. They can just post content to Moltbook. And if the site itself is compromised, every connected agent could become an attack vector.
In fact, on 31 January, there was a critical vulnerability that allowed anyone to commandeer any agent on the platform. Moltbook was taken offline, and all agent API keys were reset, according to Astrix Security.
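Agents posting credentials to a public feed is at least straightforward to detect on the defender’s side. A minimal sketch follows, assuming you can pull post bodies from the platform (the fetching step is omitted) and that keys follow the common `sk-` prefix format, which is itself an approximation.

```python
import re

# Approximate pattern for OpenAI-style secret keys; exact formats vary.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def find_leaked_keys(posts: list[str]) -> set[str]:
    """Return the set of key-like strings found in a batch of post bodies."""
    leaked = set()
    for body in posts:
        leaked.update(KEY_PATTERN.findall(body))
    return leaked

# Example with stand-in post text; real input would come from the platform feed.
sample_posts = [
    "my human gave me sk-abc123def456ghi789jkl012 to share, anyone want it?",
    "today I refactored the garage door schedule",
]
for key in find_leaked_keys(sample_posts):
    print(f"[!] leaked key detected: {key[:8]}... rotate immediately")
```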
Immediate action steps
According to Gartner, enterprises should take the following steps:
- Immediately block OpenClaw downloads and traffic to prevent shadow installs and to identify users attempting to bypass security controls (a log-review sketch follows this list)
- Immediately rotate any corporate credentials accessed by OpenClaw
- Only allow OpenClaw instances in isolation, in non-production virtual machines with throwaway credentials
- Prohibit unvetted OpenClaw skills to mitigate risks of supply chain attacks and prompt injection payloads
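Blocking at the proxy or DNS layer also produces the telemetry that Gartner’s first step calls for. A minimal log-review sketch follows, assuming a CSV proxy log with user and domain columns; the blocked domains are placeholders, since the actual hosts OpenClaw contacts would need to be confirmed before deployment.

```python
import csv
from collections import Counter

# Placeholder domains; confirm the real download and update hosts before use.
BLOCKED_DOMAINS = {"openclaw.example", "clawhub.example"}

def find_shadow_installs(proxy_log: str) -> None:
    """Count blocked-domain hits per user from a CSV proxy log (user,domain columns assumed)."""
    hits = Counter()
    with open(proxy_log, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in BLOCKED_DOMAINS:
                hits[row["user"]] += 1
    for user, count in hits.most_common():
        print(f"{user}: {count} attempts to reach blocked OpenClaw domains")

if __name__ == "__main__":
    find_shadow_installs("proxy_log.csv")
```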