Two AI releases early this year are prompting users to give up control and let autonomous agentic tools complete tasks on their behalf. IT leaders should be ready to deal with the consequences.

Anthropic rolled out its agentic platform Claude Cowork in January for macOs and February for Windows, and use of agentic tool OpenClaw skyrocketed early this year after developer Peter Steinberger, now with OpenAI, launched the open-source project in late 2025.

While most organizations are focused on deploying AI that augments human work, there’s been a huge spike in interest in autonomous agentic AI since late last year, says Neal Riley, innovation lead and former CIO at IT consultancy and digital transformation parent company The Adaptavist Group.

Many organizations, even traditionally risk-adverse firms in the financial services and healthcare industries, have begun to experiment with autonomous AI as they look to reshape their workflows, he says.

Even with concerns about unanticipated results and autonomous agents operating as shadow AI, early adopters of agentic AI see huge potential for the technology to be a force multiplier that, for example, empowers non-technical people to solve minor IT problems without involving the tech team.

“Coming to 2026, we are starting to see people investing quite heavily in a lot of these processes that are more agentic and allowing this kind of control to happen in a very tight and regulated way, but allowing for these systems to take that level of autonomy,” Riley says. “We are seeing a huge uptick in this.”

Autonomous bots for everyone

OpenClaw and Claude Cowork are at the forefront of this coming revolution, enabling users to enlist AI to automate workflows on their computers. OpenClaw bots integrate with external large language model (LLM) AIs, such as Claude and OpenAI’s GPT models, and users access it through a chatbot running on a messaging service such as WhatsApp, Telegram, or Discord.

Users give Claude Cowork access to their applications and files, then prompt the AI to complete tasks. Cowork can organize files, build spreadsheets, prepare reports, and analyze notes, by accessing files on the user’s computer, pulling in context from apps such as Slack, and browsing the Web for more information. Before Claude takes action, it shows the user the plan and waits for approval, according to Anthropic.

Still, some users have given these autonomous agents a high level of control, and there are risks when they turn over their computers without hard limits.

Meta AI security researcher Summer Yue in late February tweeted that OpenClaw tried to delete her email inbox after she asked the AI to clean it up. “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” she wrote.

She acknowledged a rookie mistake. “Turns out alignment researchers aren’t immune to misalignment,” she wrote. “Got overconfident because this workflow had been working on my toy inbox for weeks. Real inboxes hit different.”

One of the top replies to Yue’s tweet was a picture of someone handing a chimpanzee an assault rifle.

Researchers have also found several security flaws in OpenClaw, including a vulnerability to prompt injection attacks.

Big risk, big reward

Herein lies the rub: AI experts see huge potential advantages with autonomous AI — with the possibility of creating huge workplace efficiencies — but the risks are substantial.

Riley acknowledges both security concerns and the potential for agentic AI to take actions that users didn’t anticipate. While users haven’t yet seen autonomous AI able to complete work faster or cheaper than humans — tokens are expensive — the technology has the potential to remake the nature of work for the better, he says.

“When you talk about the advantages, it’s definitely replacing the work that happens today, but almost that’s a byproduct,” he says. “What it actually enables you to do is coordinate in a different way than you did before with the passing of information back and forth across the team to get those things out faster, with better quality.”

Autonomous AIs will allow organizations to redeploy their human workforces to new tasks, removing much of the drudgery work, advocates say.

“Once you can start trusting a lot of these agentic systems to take the responsibility for things, often it’s not doing it faster or even better than what the human does,” Riley says. “What it does is it doesn’t require the human to be involved, which means they can work on other things.”

Many companies are still early in the autonomous AI journey, says Upal Saha, CTO at AI data integration provider bem. One of the big challenges is getting the AIs to understand how the business operates, he says.

“Inside most companies, the relationships between processes, data, and decisions aren’t documented cleanly,” he adds. “That knowledge lives across teams and individuals. Agents can be incredibly capable, but without that operational context they’re often guessing rather than executing.”

Speed is a huge potential advantage of autonomous agents, but it’s also one of the downsides, Saha notes.

“If they have the right context, they can compress hours of manual operational work into seconds,” he adds. “The downside is that the same speed can amplify mistakes. If an agent misunderstands a workflow or data structure, it can repeat that mistake at scale.”

Despite the risks, the market is shifting quickly toward agentic AI, with large-scale adoptions coming in the next two years, says Russell Twilligear, head of AI R&D at AI-generated content provider BlogBuster.

“We are witnessing a shift from systems that only generate text towards systems that can actually execute multi-step work,” he says. “The biggest advantage is that autonomous agents don’t just answer a simple prompt. They can move from intent to execution by gathering information, updating systems, etc.”

However, there’s a danger if autonomous agents are implemented incorrectly, Twilligear adds.

“The biggest disadvantage is that this is going to scale faster than we can control it,” he says. “That means security risks and misfires on every new integration.”

Security and oversight are the major problems to overcome, he adds. “When an agent can access email, files, browsers, etc., you are opening a world of hurt,” Twilligear says. “The problem is how fast all of this is happening. Recent security reporting shows that a lot of companies don’t even have monitoring over their AI agents. To me, that is just wild.”

Allow experimentation

IT leaders deploying autonomous agents need to put robust controls in place, ensure that their data is clean and accessible, and their app permissions are correctly configured, The Adaptavist Group’s Riley says.

Despite security and output concerns, Riley encourages IT leaders to allow employees to experiment with the emerging technology because of the impending adoption. Organizations that invest in AI training and allow employees to play with the technology tend to get better results from deployments, he notes.

“With all of these tools that are available, people should be trying right now to just understand how they work,” he says. “These things are coming out so fast that the onboarding and the sort of enablement you would have gotten in IT software 10 years ago just simply isn’t there. Everyone’s approach to this is, just go play with it, and you’ll figure out how it works.”

See also:

Read More