Key Takeaways:
- If your organization is considering using a large language model (LLM) in its production environment, there are important considerations to keep in mind.
- The LLM terms offered by vendors may look like other SaaS agreements, but certain terms deserve closer scrutiny in the context of LLMs.
- Widespread use of LLMs is still relatively new and fraught with unknown risks, so vendors are shifting the risks to customers.

Since OpenAI released ChatGPT (then powered by GPT-3.5) in November 2022, generative AI has evolved from a novelty into a productivity tool used by many. However, AI vendors don’t have the same level of comfort with their AI offerings as they do with their SaaS products.
As a result, AI vendors are less inclined to bear the risks associated with their products, and we are seeing customers take on more risk than they have previously tolerated.
To understand this area more deeply, I completed an independent, comprehensive benchmark of the contract terms of eight leading LLM vendors, including OpenAI, Meta, Anthropic, and xAI. In this article, I share my findings and draw on my own experience negotiating SaaS agreements and reviewing LLM terms in my practice. If you plan to use an LLM, or represent clients who do, keep these key findings in mind to help inform you of the most important risks and considerations.
Key Customer Risks in LLM Terms
Vendor indemnification for third-party IP infringement claims has long been a staple of SaaS contracts for commercially available products. But it took years of public pressure and high-profile lawsuits before LLM pioneers like OpenAI relented and agreed to indemnify their users against IP infringement claims. Only a handful of other LLM vendors have followed suit; most still hold back.
Other terms commonly found in software contracts, such as limitations of liability and warranties, dramatically shift risk to customers in LLM vendor terms. The reality is that these products are still evolving and often unreliable (as the attorneys who asked LLMs to write briefs painfully learned). The contracts read more like beta agreements, with nearly every one containing an “AS-IS” disclaimer.
If your organization is planning to use an LLM, make sure it goes in with eyes wide open about the risks involved and the limited (if any) remedies it may have if things go sideways. Here’s a non-exhaustive list of the most important LLM terms to consider when reviewing vendor contracts.
1. Data Use
As with any product accessed via the internet, it is important to know how the vendor may use your data, particularly when your client may put internal, confidential, and/or proprietary information into the tool.
Most vendors reserve the right to use customer data to provide, maintain, and develop the service, including the right to develop new products. However, not all vendors disclose whether their use includes training their models.
Vendors that specify whether they intend to use your data to train their models will sometimes offer an “opt-in” or “opt-out” mechanism, but those options are usually reserved for enterprise subscriptions. Vendors also warn that opting out will inhibit the effectiveness of the model.
2. Intellectual Property Rights
This one is a no-brainer: your client must ensure that entering their information into an LLM does not compromise any intellectual property rights, whether their own or those of a third party. It is also paramount to know what rights the user has in the output of the tool.
The standard approach taken by vendors is to say that the customer retains the IP in their content and, as between the parties, the customer owns the output, minus any vendor IP contained within.
Why do vendors say “as between the parties”? To insulate themselves against any third-party claims of ownership.
Vendors will also reserve their ownership rights in the service and sometimes include a reservation of rights to derivative works. This is important to consider if your client intends to create a “tuned” model.
3. Representations & Warranties
Although many of the most popular LLMs today are sold as commercial products, the warranties LLM vendors are willing to make are sparse.
Most vendors disclaim all warranties with an “AS-IS” statement. Some will warrant that the LLM will conform to the applicable documentation, but that is essentially the only warranty on offer.
Some LLM vendors caveat their disclaimer with the phrase “unless required by applicable law,” but that won’t mean much if the applicable law allows the disclaimer.
4. Limitation of Liabilities
Knowing the remedies available from LLM providers is obviously a key component of any contract evaluation, and in the LLM context the landscape isn’t great.
Based on my audit, most LLM vendors set the limitation of liability cap at the amount of fees paid prior to the date of the claim, or a flat $100-$200. For LLMs that are free or carry a small monthly subscription fee, the resulting ceiling on damages is quite low.
Other vendors are bolder and disclaim any and all liability. I am also seeing the risks tied to using the output explicitly made the customer’s responsibility.
5. Indemnification
A few vendors have taken the important step of agreeing to indemnify their customers for IP infringement caused by the service, albeit with some broad carve-outs that can leave customers less protected than they think.
- The few vendors that do offer indemnities caveat their obligations in a number of ways, the most common being if the customer “knew or should have known” that the output would be infringing. In certain situations, this could become quite subjective and shift the burden of proof to the customer.
- Another caveat disclaims coverage if the output is mixed with another third-party product or service, and the result is infringing. Since the potential uses of LLM output are growing exponentially, use cases where the output is used in combination with other technologies are more likely to become the default, not the exception.
- Meanwhile, most vendors require the customer to indemnify them for a broad category of claims arising from use of the service, not limited to IP infringement, and this obligation is never subject to a dollar cap.
6. Suspension and Termination
Vendors will reserve broad suspension and termination rights, arising from a breach of the terms, a failure to pay, or if required by law. A few things to keep in mind:
- Some vendors will make termination subject to a “material” breach, but some reserve termination for a breach of “any” term.
- If your client plans to use an LLM in a production environment, suddenly losing access to the service could be very disruptive.
- Vendors also reserve the right to modify or remove features at any time without warning, so becoming dependent upon a particular feature comes with additional risk.
Use of LLMs is going to continue to grow exponentially, so it is important to be aware of the risks and limited remedies that come with LLM terms within vendor contracts. Hopefully as these products become more reliable, LLM providers will begin to put more skin in the game. But for now, caution and careful contract review are critical.