In July 2025, McDonald’s had an unexpected problem on the menu, one involving McHire, its AI-powered platform used to recruit and screen job applicants. The system, developed by Paradox.ai, featured a rookie-level security flaw: the backend for restaurant operators accepted “123456” as both username and password, and lacked multi-factor authentication. As a result, the personal data of around 64 million applicants was exposed to potential compromise. Fortunately, the flaw was uncovered by security researchers Ian Carroll and Sam Curry, who notified the company.
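The failure here is the kind a basic credential-hygiene check catches at login. A minimal sketch of such a check, in hypothetical Python (not Paradox.ai’s actual code; the blocklist and length threshold are illustrative assumptions):

```python
# Hypothetical credential-hygiene check of the kind that would have
# rejected "123456"/"123456" on an operator backend. Not the actual
# McHire implementation -- an illustrative sketch only.

COMMON_DEFAULTS = {"123456", "password", "admin", "letmein", "qwerty"}

def validate_credentials(username: str, password: str) -> list[str]:
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if password.lower() in COMMON_DEFAULTS:
        problems.append("password is a well-known default")
    if password == username:
        problems.append("password must differ from username")
    if len(password) < 12:  # minimum length is an assumed policy value
        problems.append("password shorter than 12 characters")
    return problems
```

Checks like this are deliberately cheap: they run at account creation and login, before any multi-factor step, and fail closed on exactly the defaults attackers try first.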
With organizations rushing to deploy AI tools without fully auditing them, incidents like this are not uncommon. AI adoption is moving faster than AI security and governance, according to an IBM report. Last year, 13% of organizations reported breaches involving AI models or applications, while another 8% said they didn’t even know whether those systems had been compromised.
And insurers know that. Many have tightened policy language, raised premiums, and carved out explicit exclusions for certain AI-related incidents, in an effort to limit exposure to risks that are still poorly understood. A survey by Delinea found that 42% of respondents said their cyber insurance policies now include exclusions tied to AI misuse and liability.
Yet the picture is not entirely one-sided. Insurers are also rewarding stronger defenses: 86% of organizations say they have received premium discounts or credits for using AI-based security tools that strengthen their security posture.
“AI is both a risk and an opportunity,” says Nate Spurrier, vice president of insurance and counsel strategy at GuidePoint Security.
Cyber insurers are changing how they judge risk
As AI becomes more deeply embedded across business operations — and increasingly exploited by attackers — cyber insurers are rethinking how they evaluate risk. Many are now moving beyond checkbox questionnaires and self-attestations, asking for evidence that security controls are actively monitored, tested and enforced. According to the Delinea report, 77% of insurers now require formal reviews by internal and IT security teams before issuing or renewing coverage, up from 56% a year ago.
But even those reviews are no longer enough on their own. “Leading cyber insurers have moved away from moment-in-time application forms toward continuous assessment of an organization’s attack surface and controls,” says Michael Phillips, Coalition’s head of global cyber portfolio underwriting.
In addition to underwriting and settling claims, Coalition bundles cybersecurity services with its cyber insurance offerings. Policyholders gain access to tools that continuously scan internet-facing systems for vulnerabilities and surface alerts, alongside expert guidance and threat intelligence. The idea is to reduce the frequency and severity of claims by linking a company’s security posture directly to its insurance coverage.
And as AI touches many corners of modern business operations, that heightened scrutiny now extends to how companies use and govern the technology. “Insurance carriers are wanting to know how policyholders and applicants are using AI within their organization: what controls are in place, how AI is being used and for what specific tasks, who is allowed to use it, and whether it’s simply an efficiency tool or a core part of the end solution being offered to clients,” says Spurrier.
Changes to coverage and language
Now that AI is everywhere, insurers are rewriting their contracts to be much more specific about what’s covered and what’s not. Some have introduced affirmative AI endorsements; others have added exclusions. Because AI risks can be unpredictable and potentially large-scale, insurers don’t want to be on the hook for losses they can’t accurately price.
Crafting the right policy language for a fast-evolving technology is a complex task. “Right now, insurers don’t have enough claims data to fully understand what language and components of AI risk should be targeted, so some carriers are using broad exclusions out of caution,” Spurrier says.
Yet that caution can be detrimental for organizations. “AI is now an expected component of a successful cyber attack, and it’s not always easy to discern whether something was created by AI or not,” says Phillips. “If a policy excludes any AI‑related loss, an insurer could argue that a classic ransomware claim is out of scope simply because AI was used as part of the attack process.”
The issue is compounded by how policies have evolved. Many were written before generative AI went mainstream. Insurers later added AI-related language, layering new terms onto older contracts. This patchwork approach can create confusion. “If that wording isn’t explained clearly, policyholders may assume they have the same protection as before, but they do not,” Phillips says.
Businesses and their brokers need to read policy language closely and talk through how it would actually work in practice. That means walking through specific AI-related scenarios before renewal and seeing how they might affect different lines of coverage.
“One scenario may not impact some lines of insurance and then show up as excluded in another line of insurance,” Spurrier says. “The time to clarify your AI coverage isn’t during a claim, but during renewal and other pre-incident scenarios.”
Bringing costs down for companies
Companies that can prove a strong security posture may be able to lower their insurance costs. To do that, they need to demonstrate that they’re using AI-driven tools to spot anomalies early or cut response times from hours to minutes. “For insurers, that means smaller claims and faster recovery,” Spurrier says.
Discounts are usually offered to businesses that have strong, round-the-clock security in place. “Detection solutions like EDR (endpoint detection and response) are now widely expected by insurance carriers, and the next step is to continuously monitor the alerts generated so that action can be taken quickly,” Spurrier adds.
In the near future, AI-powered defenses may become mandatory for coverage, much like multi-factor authentication and EDR tools are today. This means that companies that lag behind may find themselves at a disadvantage. “If you’re relying on legacy tools, expect higher premiums or limited coverage,” he says.