Key Takeaways:

  • AI indemnity is evolving beyond IP claims to address broader model risks.
  • Traditional ideas of fault don’t align with AI’s probabilistic and shared-risk nature.
  • Contracts must evolve to address AI-specific harms, liability caps, and use-case risks.

The AI Output Problem: Rethinking Indemnity in the Age of Generative AI
By Laura Belmont

The world of generative AI contracting is still taking shape, but early patterns are emerging. One notable trend is that major large language model (LLM) providers, such as OpenAI and Anthropic, are offering AI indemnity for third-party intellectual property (IP) infringement claims.

This signals a recognition that model developers must bear some baseline accountability.

But what about everything else?

LLMs don’t just raise IP issues—they create an entirely new risk landscape. What happens when a model hallucinates defamatory statements, reveals sensitive health information, embeds bias into hiring decisions, or offers dangerously inaccurate advice?

When outputs trigger third-party claims, who should bear the risk?

The vendor that built and trained the model? The customer that fine-tuned and deployed it? Or the end user who submitted the prompt? When no single actor has full control, traditional ideas about fault and responsibility start to break down.

These aren’t theoretical questions. They’re reshaping how lawyers and contract professionals approach indemnity clauses in AI deals.

The Problem of Fault in a Probabilistic World

Indemnity is rooted in causation. One party protects another from harm it caused. But AI systems operate in non-deterministic, probabilistic ways. When a chatbot generates harmful content, the “cause” isn’t always a clear coding error. It could be the result of model architecture, training data, prompt design, deployment context, or a combination of factors.

Without clear legal standards for attributing fault, parties are left to allocate responsibility through contract.

Training Data and the Black Box Problem

IP indemnity is gaining traction but remains far from standardized. While some leading vendors indemnify for certain output-related IP claims, others stay silent or carve out key exceptions. One major fault line lies in training data.

Generative models are often trained on massive datasets scraped from the web or acquired through opaque third parties. That opens the door to claims that:

  • The model was trained on copyrighted or proprietary material;
  • It constitutes a derivative work of protected content; or
  • The data was collected in violation of privacy laws or platform terms.

Customers should clarify whether IP indemnity includes training data, push for representations about lawful sourcing, and watch for exclusions tied to open-source or fine-tuned models. 

Vendors, on the other hand, may want to limit indemnity to known model versions, use clear disclaimers about third-party data, and explore alternatives like scoped warranties or insurance-backed protections.

Learn More: Key Contractual Considerations for Onboarding GenAI Tools + Free GenAI Tool Intake Questionnaire

Third-Party Claims from Model Output

Beyond IP, the output of generative AI can create a host of legal liabilities. Models may:

  • Defame individuals or organizations;
  • Reveal private or regulated information; or
  • Violate anti-discrimination, consumer protection, or other laws and regulations.

Here, indemnity becomes even more contested. Vendors often argue that they can’t control what users ask or how models are deployed, and thus shouldn’t be liable for the outputs. But enterprise customers, especially in regulated industries, counter that they shouldn’t absorb the full risk when they lack insight into the model’s inner workings.

Some contracts are starting to reflect this tension through hybrid indemnity models, where vendors cover core model-level risks (e.g., training data, embedded bugs) and customers take responsibility for how models are prompted, fine-tuned, or deployed.

Customers may want to negotiate for indemnity in high-risk contexts like hiring, finance, or healthcare, and to request vendor support through notice and cooperation obligations. Vendors, meanwhile, can limit output-related indemnity to authorized use cases, include carve-outs for misuse or prompt injection, and encourage safer deployments through clearer documentation and disclaimers.

A Word on Disclaimers and the Risk of Hollow Indemnity

To limit exposure, many vendors describe their tools as “assistive technologies” and place responsibility for use squarely on the customer. These disclaimers may undermine the practical value of indemnity.

If the vendor disclaims all responsibility for outputs and the customer must verify everything, what meaningful protection is left? The challenge is finding a balance between fair risk-sharing and the operational realities of how AI tools are actually used.

Rethinking the Liability Cap

These same tensions spill into limitation of liability clauses. Buyers increasingly seek carve-outs or elevated caps for AI-related harms, while providers argue that probabilistic systems make uncapped or super-capped liability untenable.

Some emerging approaches include:

  • Output-based carve-outs for third-party claims arising from generated content;
  • Supercaps for high-risk use cases like legal, financial, or healthcare applications; and
  • Usage-based caps that vary depending on whether the model is base, fine-tuned, or used with high-risk prompts.

These frameworks signal a shift away from one-size-fits-all SaaS contracting toward more adaptive, scenario-specific risk terms.

Drafting for the Now 

Generative AI systems don’t just behave unpredictably; they also blur the lines of responsibility. Unlike legacy software, AI involves a web of actors, including the developer who trained the model, the customer who deployed it, and the end user who shaped its output through prompts.

In this world, risk is both probabilistic and shared. Contract terms must evolve to reflect that complexity. That means considering:

  • Tailoring indemnity to explicitly include (or exclude) AI-specific harms;
  • Applying tiered or use-case-based liability caps, particularly in sensitive domains like hiring, healthcare, or legal services;
  • Requiring human oversight and intervention for high-risk applications;
  • Triggering contractual reviews or model revalidation after major updates; and
  • Exploring whether insurance can serve as a safety net where contract clauses fall short.

Responsible AI adoption requires balancing innovation with accountability.


For more on AI and Contracts, check out my full column here.
