For years, CSOs have worried about their IT infrastructure being used for unauthorized cryptomining. Now, say researchers, they’d better start worrying about crooks hijacking and reselling access to exposed corporate AI infrastructure.

In a report released Wednesday, researchers at Pillar Security say they have discovered campaigns at scale going after exposed large language model (LLM) and Model Context Protocol (MCP) endpoints – for example, an AI-powered support chatbot on a website.

“I think it’s alarming,” said report co-author Ariel Fogel. “What we’ve discovered is an actual criminal network where people are trying to steal your credentials, steal your ability to use LLMs and your computations, and then resell it.”

“It depends on your application, but you should be acting pretty fast by blocking this kind of threat,” added co-author Eilon Cohen. “After all, you don’t want your expensive resources being used by others. If you deploy something that has access to critical assets, you should be acting right now.”

Kellman Meghu, chief technology officer at Canadian incident response firm DeepCove Security, said that this campaign “is only going to grow to some catastrophic impacts. The worst part is the low bar of technical knowledge needed to exploit this.”

How big are these campaigns? In the past couple of weeks alone, the researchers’ honeypots captured 35,000 attack sessions hunting for exposed AI infrastructure.

“This isn’t a one-off attack,” Fogel added. “It’s a business.” He doubts a nation-state is behind it; the campaigns appear to be run by a small group.

The goals: to steal compute resources for unauthorized LLM inference requests, to resell API access at discounted rates through criminal marketplaces, to exfiltrate data from LLM context windows and conversation history, and to pivot to internal systems via compromised MCP servers.

Two campaigns

The researchers have so far identified two campaigns: one, dubbed Operation Bizarre Bazaar, targets unprotected LLMs; the other targets MCP endpoints.

It’s not hard to find these exposed endpoints. The threat actors behind the campaigns are using familiar tools: the Shodan and Censys search engines, which index internet-exposed devices and services.

At risk: organizations running self-hosted LLM infrastructure – such as Ollama (software that serves requests to the LLM model behind an application), vLLM (a similar inference server built for high-performance environments) and other local AI deployments – as well as organizations deploying MCP servers for AI integrations.

Targets include:

  • exposed endpoints on default ports of common LLM inference services;
  • unauthenticated API access without proper access controls;
  • development/staging environments with public IP addresses;
  • MCP servers connecting LLMs to file systems, databases and internal APIs.

Common misconfigurations leveraged by these threat actors include the following (a quick self-check sketch follows the list):

  • Ollama running on port 11434 without authentication;
  • OpenAI-compatible APIs on port 8000 exposed to the internet;
  • MCP servers accessible without access controls;
  • development/staging AI infrastructure with public IPs;
  • production chatbot endpoints (customer support, sales bots) without authentication or rate limiting.
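
For teams that want to confirm they aren’t on that list, a quick self-check can reproduce what the scanners see from the outside. The sketch below is a minimal example, not the researchers’ tooling: it assumes the default ports named above, probes Ollama’s model-listing endpoint (GET /api/tags) and the OpenAI-compatible model list (GET /v1/models, as served by vLLM, for instance), and uses a placeholder host list you would replace with your own externally reachable addresses.

# Minimal self-check sketch: probe your own hosts on the default LLM ports.
# HOSTS is an illustrative placeholder, not a list from the report.
import requests

HOSTS = ["203.0.113.10"]  # replace with your own externally reachable addresses

CHECKS = [
    ("Ollama", 11434, "/api/tags"),                 # lists installed models if open
    ("OpenAI-compatible API", 8000, "/v1/models"),  # e.g. a vLLM server
]

for host in HOSTS:
    for name, port, path in CHECKS:
        url = f"http://{host}:{port}{path}"
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # port closed or filtered: nothing reachable
        if resp.status_code == 200:
            print(f"[!] {name} at {url} answered without credentials")
        elif resp.status_code in (401, 403):
            print(f"[ok] {name} at {url} requires authentication")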

George Gerchow, chief security officer at Bedrock Data, said Operation Bizarre Bazaar “is a clear sign that attackers have moved beyond ad hoc LLM abuse and now treat exposed AI infrastructure as a monetizable attack surface. What’s especially concerning isn’t just unauthorized compute use, but the fact that many of these endpoints are now tied to the Model Context Protocol (MCP), the emerging open standard for securely connecting large language models to data sources and tools. MCP is powerful because it enables real-time context and autonomous actions, but without strong controls, those same integration points become pivot vectors into internal systems.”

Defenders need to treat AI services with the same rigor as APIs or databases, he said, starting with authentication, telemetry, and threat modelling early in the development cycle. “As MCP becomes foundational to modern AI integrations, securing those protocol interfaces, not just model access, must be a priority,” he said.

In an interview, Cohen and Fogel said they couldn’t estimate how much revenue the threat actors have pulled in so far. But they warned that CSOs and infosec leaders had better act fast, particularly if an LLM is accessing critical data.

Their report described three components to the Bizarre Bazaar campaign:

  • the scanner: a distributed bot infrastructure that systematically probes the internet for exposed AI endpoints. Every exposed Ollama instance, every unauthenticated vLLM server, every accessible MCP endpoint gets cataloged. Once an endpoint appears in scan results, exploitation attempts begin within hours (a log-review sketch for spotting these probes follows this list);
  • the validator: once scanners identify targets, infrastructure tied to an alleged criminal site validates the endpoints through API testing. During a concentrated operational window, the attacker tested placeholder API keys, enumerated model capabilities and assessed response quality;
  • the marketplace: discounted access to 30+ LLM providers is being sold on a site called The Unified LLM API Gateway. It’s hosted on bulletproof infrastructure in the Netherlands and marketed on Discord and Telegram.
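
One practical way to tell whether the scanner stage has already found you is to look for its fingerprints in ordinary web server logs. The following sketch is an illustration under assumptions, not part of Pillar Security’s report: it reads a standard combined-format access log (the log path and allow-list are placeholders) and flags requests to common model-listing and inference paths from unfamiliar addresses.

# Minimal detection sketch: flag probes of common AI endpoints in an access log.
# LOG_PATH and KNOWN_CLIENTS are illustrative placeholders.
import re

LOG_PATH = "/var/log/nginx/access.log"
PROBE_PATHS = ("/api/tags", "/v1/models", "/api/generate", "/v1/chat/completions")
KNOWN_CLIENTS = {"198.51.100.7"}  # your own monitoring or test hosts

line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

with open(LOG_PATH) as fh:
    for line in fh:
        m = line_re.match(line)
        if not m:
            continue
        ip, path = m.groups()
        if path.startswith(PROBE_PATHS) and ip not in KNOWN_CLIENTS:
            print(f"possible AI-endpoint probe from {ip}: {path}")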

So far, the researchers said, those buying access appear to be people building their own AI infrastructure and trying to save money, as well as people involved in online gaming.

Threat actors may not only be stealing AI access from fully developed applications, the researchers added. A developer prototyping an app who carelessly leaves a server unsecured could also fall victim to credential theft.

Joseph Steinberg, a US-based AI and cybersecurity expert, said the report is another illustration of how new technology like artificial intelligence creates new risks, and with them the need for new security solutions that go beyond traditional IT controls.

CSOs need to ask themselves if their organization has the skills needed to safely deploy and protect an AI project, or whether the work should be outsourced to a provider with the needed expertise.

Mitigation

Pillar Security said CSOs with externally-facing LLMs and MCP servers should:

  • enable authentication on all LLM endpoints. Requiring authentication eliminates opportunistic attacks. Organizations should verify that Ollama, vLLM, and similar services require valid credentials for all requests (a minimal proxy sketch follows this list);
  • audit MCP server exposure. MCP servers must never be directly accessible from the internet. Verify firewall rules, review cloud security groups, confirm authentication requirements;
  • block known malicious infrastructure. Add the 204.76.203.0/24 subnet to deny lists. For the MCP reconnaissance campaign, block AS135377 ranges;
  • implement rate limiting. Stop burst exploitation attempts. Deploy WAF/CDN rules for AI-specific traffic patterns;
  • audit production chatbot exposure. Every customer-facing chatbot, sales assistant, and internal AI agent must implement security controls to prevent abuse.
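
As a concrete illustration of the first and fourth recommendations, here is a minimal sketch of an authenticating, rate-limited reverse proxy placed in front of a locally bound Ollama instance. It is an assumption-laden example rather than vendor guidance: the FastAPI/httpx stack, the LLM_PROXY_KEY environment variable and the 30-requests-per-minute budget are all illustrative choices, and the upstream address assumes Ollama listens only on localhost.

# Minimal sketch: authentication plus per-IP rate limiting in front of a
# local model server. All names and limits here are illustrative assumptions.
import os
import time
from collections import defaultdict

import httpx
from fastapi import FastAPI, HTTPException, Request, Response

UPSTREAM = "http://127.0.0.1:11434"            # Ollama bound to localhost only
API_KEY = os.environ.get("LLM_PROXY_KEY", "")  # shared secret for clients
RATE = 30                                      # max requests per minute per client IP

app = FastAPI()
_hits: dict[str, list[float]] = defaultdict(list)

def _allowed(ip: str) -> bool:
    """Very small sliding-window rate limiter."""
    now = time.time()
    window = [t for t in _hits[ip] if now - t < 60]
    window.append(now)
    _hits[ip] = window
    return len(window) <= RATE

@app.api_route("/{path:path}", methods=["GET", "POST"])
async def proxy(path: str, request: Request) -> Response:
    # 1. Require a valid bearer token on every request.
    if not API_KEY or request.headers.get("authorization") != f"Bearer {API_KEY}":
        raise HTTPException(status_code=401, detail="authentication required")
    # 2. Enforce a per-IP rate limit to blunt burst exploitation.
    client_ip = request.client.host if request.client else "unknown"
    if not _allowed(client_ip):
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    # 3. Forward the request to the local model server.
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{UPSTREAM}/{path}",
            content=await request.body(),
            headers={"content-type": request.headers.get("content-type", "application/json")},
            timeout=120,
        )
    return Response(content=upstream.content, status_code=upstream.status_code)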

Don’t give up

Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. “Do not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI/LLM in a safe way that benefits the business,” he advised.

“It is probably time to have dedicated training on AI use and risk,” he added. “Make sure you take feedback from users on how they want to interact with an AI service and make sure you support and get ahead of it. Just banning it sends users into a shadow IT realm, and the impact from this is too frightening to risk people hiding it. Embrace and make it part of your communications and planning with your employees.”
