Key Takeaways:

  • AI tools can retain and expose confidential information without direct disclosure.
  • Confidentiality clauses should prohibit AI training, retention, and third-party use of data.
  • Include AI-specific safeguards, addenda, and monitoring in NDAs to mitigate evolving risks.

7 AI-Specific Confidentiality Language Proposals by Stacey Heller

Most NDAs and confidentiality clauses include something similar to the following prohibitory language: “Confidential information may not be disclosed to third parties without the discloser’s prior written consent.”

But if your data is ingested by your own software tool, did you really “disclose” that data to a third party?

Providers of AI large language models (LLMs) might use customer data to “train” their models. And AI models can perpetually retain confidential information embedded in their systems. It may become integrated in such a way that it is impossible to segregate, remove, destroy, or return this data once training is complete.

Therefore, even if confidential data is never proactively “disclosed,” merely using it to train an AI model creates a risk that a third party may “discover” the information simply by running a search with the right criteria.

For example, suppose you use ChatGPT to improve some of your employer’s proprietary source code by inputting the code into ChatGPT along with the prompt: “How do I optimize this code?” If your original source code is permanently stored in ChatGPT’s “brain,” it might later be reproduced verbatim in an answer to another company’s question.

To mitigate this risk, you may want to update your template confidentiality language as follows.

1. Restrict AI Tools from Storing or Retaining Your Data

Traditional contract language typically permits disclosure of confidential data to “employees” or “representatives,” but restricts sharing with unaffiliated “third parties” without prior written consent. AI tools fit neither category neatly, so you may want to expressly prohibit these tools from storing or retaining any of your data, which significantly mitigates the risk of unintentional disclosure.

2. Prohibit Use in AI Training

Many AI tools train on customer data shared within their platforms. Adding specific language to your confidentiality provisions that prohibits the use of your data or confidential information to train AI models, LLMs, or algorithms can help ensure that sensitive data does not find its way into AI models.

3. Include Compliance and Enforcement Requirements

Emphasize the importance of accountability and proactive risk mitigation by including provisions granting the right to monitor compliance, conduct audits, receive reports, and require notice of known breaches. In addition, include legal remedies such as injunctive relief and financial penalties (e.g., indemnities) for noncompliance.

4. Address Third Party Risks Explicitly

Many vendors use third-party subcontractors. What if those subcontractors use AI tools? If a customer negotiates protections with its vendor but not with the vendor’s subcontractors, a subcontractor’s AI tool might still use the customer’s data for training, unless the subcontractor is explicitly prohibited from doing so. For example, you may insist that your vendor:

  • Disclose any third-party data processing tools they use;
  • Ensure that all third parties are subject to equivalent confidentiality and data protection standards (and are expressly restricted from using customer content for AI training);
  • Agree not to transfer confidential information to third parties (including third-party AI tools) without prior written consent; and
  • Explicitly acknowledge the remedies, financial and otherwise (including indemnity from the vendor), for breaches by these third parties and third-party tools.

5. Implement Data Classification Systems

Include a system for classifying confidential information based on different levels of sensitivity. For example, highly sensitive information such as healthcare or financial data might be completely prohibited from any interaction with AI systems, regardless of any safeguards.
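
To see how such a classification scheme might be operationalized, here is a minimal sketch of a policy gate that a company could run before any data reaches an AI system. The tier names and the `is_ai_use_permitted` helper are hypothetical, invented for this illustration rather than drawn from any particular product or standard:

```python
# Minimal, hypothetical sketch of a sensitivity-tier policy gate.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1       # freely shareable
    INTERNAL = 2     # may be processed by approved AI tools, with safeguards
    RESTRICTED = 3   # e.g., healthcare or financial data: no AI interaction

# Policy table: whether data at each tier may be sent to an AI tool at all.
AI_POLICY = {
    Sensitivity.PUBLIC: True,
    Sensitivity.INTERNAL: True,      # subject to contractual safeguards
    Sensitivity.RESTRICTED: False,   # prohibited regardless of safeguards
}

def is_ai_use_permitted(tier: Sensitivity) -> bool:
    """Return True if data classified at this tier may reach an AI system."""
    return AI_POLICY[tier]

print(is_ai_use_permitted(Sensitivity.RESTRICTED))  # False
```

A contractual classification clause and a technical gate like this reinforce each other: the contract defines the tiers, and the tooling enforces them automatically.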

6. Technical & Operational Safeguards and AI Containment Measures

Require specific technical and operational protections when confidential information is being handled, both directly in the AI system and in any relevant connected environments (e.g., data storage systems). In addition to the usual data security provisions, consider additional ones unique to AI, such as:

  • Specialized AI monitoring tools that detect risks or vulnerabilities
  • Technical controls that prevent AI systems from memorizing or reproducing specific data types (a simple sketch follows this list)
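
To make the second bullet concrete, here is a minimal, hypothetical sketch of one such control: a redaction filter applied to text before it ever leaves your environment for an AI tool. The patterns and placeholder tokens are invented for illustration; a real deployment would rely on a vetted data-loss-prevention (DLP) ruleset rather than ad hoc regular expressions:

```python
# Hypothetical pre-submission redaction filter: scrub obviously sensitive
# patterns before text is sent to an external AI tool.
import re

REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),      # US SSN-like
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-CARD]"),             # card-number-like
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),  # email address
]

def redact(text: str) -> str:
    """Replace each sensitive pattern with a placeholder token."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Customer jane@example.com (SSN 123-45-6789) reported an issue."
print(redact(prompt))
# Customer [REDACTED-EMAIL] (SSN [REDACTED-SSN]) reported an issue.
```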

7. Create AI Confidentiality Addenda

Develop templated AI riders or addenda that can supplement existing agreements, avoiding the need to negotiate new or full agreements. In addition to confidentiality protections, these addenda might address accuracy standards (no hallucinations or bias), limits on the use of third-party tools (including not only AI but also open source), and AI-specific IP rights (e.g., who owns the prompts and the unique answers/output), among other issues.

By incorporating AI-specific language in confidentiality provisions and NDAs, businesses can better protect their confidential information, mitigate potential issues before they arise, and build trust in an increasingly AI-driven world. 
