Shadow AI, the secret, unapproved use of AI by employees, isn’t going away. In fact, workers are getting more brazen, and their employers often don’t seem to care.

In a new BlackFog survey, nearly half (49%) of workers admit to adopting AI tools without employer approval, many relying on free versions into which they freely feed sensitive enterprise data.

But perhaps more alarmingly, wide majorities — 69% of presidents and C-suite members, and 66% of directors and senior VPs — seem to be OK with this, prioritizing speed over privacy as they race to adopt AI tools.

“The efficiency gains and personnel cost savings are too large to ignore, and override any security concerns,” said Darren Williams, BlackFog founder and CEO. The research is a “stark indication” of the wide use of unapproved AI tools in the enterprise, and also the “level of risk tolerance amongst employees and senior leaders.”

Shadow AI by the numbers

The survey of 2,000 workers at companies with more than 500 employees found that shadow AI is rampant, and not much is being done to rein it in. Of those surveyed, 86% said they use AI on a weekly basis at work, the most common use cases being technical support, sales (such as email marketing), and contracts. But more than one-third of them admitted to using the free versions of company-approved tools, raising questions about where sensitive corporate data is being stored and processed.

Furthermore:

  • 51% have connected AI tools to work systems or apps without the approval or knowledge of IT;
  • 63% believe it’s acceptable to use AI when there is no corporate-approved option or IT oversight;
  • 60% say speed is worth the security risk;
  • 21% think employers will simply “turn a blind eye” as long as they’re getting their work done.

And the C-suite’s own use of shadow tools? That’s a little more difficult to gauge; they’re tight-lipped about it, indicating a wider problem, Williams noted. “Senior executives often don’t want to admit they are using AI,” he said. Instead, they’re trying to prove how valuable they are without disclosing their own AI use.

Just like workers elsewhere in the enterprise, “senior leaders are able to get more done faster than ever” with AI, he noted. For instance, he said, “you can draft a legal contract in seconds and get a lawyer to review, rather than spend weeks drafting and redrafting using external counsel.”

Concerningly, when it comes to the tools workers are using, free versions tend to be the most popular. More than half (58%) of employees using non-approved tools rely on free versions, and 34% of those working at companies that do allow AI tools are also opting for the free version.

“Non-paid is almost certainly worse because of the licensing and business models around them,” said Williams. “There is always a cost to using free tools; in this case it’s the value of your data.”

And employees are not shy about loading sensitive data into unsanctioned AI tools: 33% admit to sharing enterprise research or datasets; 27% to revealing employee data (such as salary or performance tracking); and 23% to inputting company financial information.

This becomes dangerous because virtually all free tools use ingested data to train their models, and some of the lower-tiered paid tools do, too, Williams pointed out. “And,” he said, “you cannot get this information back.” Paid enterprise plans typically let companies opt out of having their data used for training, but not always; admins should confirm this with their large language model (LLM) providers.

“The big problem is the loss of intellectual property,” said Williams. Threat actors can use this information to profile and target an organization, breach its networks, and exfiltrate confidential data for extortion.

“The more data that is disclosed to LLMs, the more information is available [to threat actors] to build a better profile,” Williams noted.

Enterprises must build policies around AI use

Many CEOs have been mandating AI adoption and are allocating capital throughout the business for this purpose, Williams noted. Executives see cost savings as a strategic advantage and a way to quickly return value to shareholders.

Unfortunately, security is an afterthought, he said. “Many companies have just chosen to ignore the problem, and have decided not to create a policy or see the value in paying for the technology, which is a very big mistake.”

Organizations are “flying blind,” he observed; 99% have no way of even knowing what is happening in their environments, because no products are in place to measure it. This should raise serious red flags for security teams, which need greater oversight of, and visibility into, these blind spots.

Williams advised enterprises to audit what is going on inside their systems, measure the scope of the problem, define policies around AI use, and adopt governance frameworks to control it.
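
To make that audit step concrete, here is a minimal sketch of what measuring the scope of shadow AI might look like: a short Python script that scans outbound web proxy logs for traffic to well-known AI tool domains and tallies usage per user. The log path, CSV schema, and domain watchlist are illustrative assumptions, not part of BlackFog’s research or any specific product.

    # shadow_ai_audit.py -- illustrative sketch only; the log path, CSV
    # schema, and domain list below are assumptions, not a real deployment.
    import csv
    from collections import Counter

    # Hypothetical watchlist of popular AI tool domains; extend as needed.
    AI_DOMAINS = {
        "chat.openai.com": "ChatGPT",
        "chatgpt.com": "ChatGPT",
        "claude.ai": "Claude",
        "gemini.google.com": "Gemini",
        "copilot.microsoft.com": "Copilot",
        "www.perplexity.ai": "Perplexity",
    }

    def audit(log_path: str) -> Counter:
        """Tally requests to known AI tools, keyed by (user, tool).

        Assumes a CSV proxy log with header columns: timestamp,user,domain.
        """
        hits = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                tool = AI_DOMAINS.get(row["domain"].strip().lower())
                if tool:
                    hits[(row["user"], tool)] += 1
        return hits

    if __name__ == "__main__":
        # Print heaviest users first so security teams see hotspots at a glance.
        for (user, tool), count in sorted(audit("proxy.log").items(),
                                          key=lambda kv: -kv[1]):
            print(f"{user}\t{tool}\t{count} requests")

In practice, the tallies would be cross-referenced against the company’s list of sanctioned tools and paid licenses to separate approved use from shadow use, giving security teams the visibility Williams argues most organizations currently lack.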

Further, employees must be made aware of the risks. Many, CISOs included, don’t actually understand the extent of the problem and its broader implications. “Education is essential and doesn’t require a lot of work,” said Williams. On the other hand, implementing a policy and framework does, and enterprises first need to decide what risks they are willing to live with.

Ultimately, he said, we are navigating an unprecedented time in history, with new technology advancing at such a rapid pace that the technologists themselves don’t even know where it is going. Enterprises must quickly understand the implications, and use AI responsibly to gain a strategic advantage.

“Just as the industrial revolution and the internet changed the way we worked, AI is doing the same,” said Williams. “In fact, we expect this to be an even bigger shift than either of those transitions.”

This article originally appeared on CIO.com.
