Key Takeaways:
- AI connectors rewire the risk profile of AI deployments by creating data pipelines, elevating exposure in ways not addressed by traditional SaaS terms or even existing AI contracts.
- Connector-driven access may fall outside standard AI contract definitions like “Input,” requiring updated language to avoid ambiguity around ownership, use, and liability.
- Because connectors can expand system access without expanding contractual protections, post-contract governance, especially around permissions, logging, and internal enablement, is now a legal necessity.

As generative AI tools become more embedded in enterprise workflows, the focus is shifting from choosing a model to configuring its deployment, including whether and how to enable connectors.
These integrations link AI tools to your data sources like Google Drive and GitHub, promising productivity and personalization by tapping into your institutional knowledge. But they also introduce legal risks that fall outside traditional SaaS contracts and even the AI enterprise agreements you’ve already negotiated.
This article explores how connectors reshape the risk profile of AI deployments, what to look for in your contracts, and how to mitigate emerging liabilities.
How Connectors Change the Risk Landscape
Unlike the standard prompt-and-response setup, connectors create a real-time data channel, allowing AI models to retrieve, reference, and, in some cases, store your enterprise data. This elevates the model’s role from passive tool to active integrator, raising both the technical and legal stakes. Vendors emphasize the upside: OpenAI calls connectors “bridges between your data and workflows,” and Anthropic frames them as “context providers.” Those benefits may well hold, but the legal reality is more complex.
Case Study: OpenAI Connectors
In June 2025, OpenAI released ChatGPT Connectors, a feature set that serves as a useful example of the legal issues to watch.
Connectors and Beta Status
Until OpenAI Business’s announcement on August 28, 2025, connectors were in “beta” status. Under the OpenAI Services Agreement (for APIs, Enterprise, Team) (“OpenAI Business Terms”), beta features carry no contractual liability: “OPENAI WILL HAVE NO LIABILITY ARISING OUT OF OR IN CONNECTION WITH BETA SERVICES – USE AT YOUR OWN RISK.” And further, OpenAI Business Terms state that beta services “have not been subjected to the same Security Measures and auditing as the Services.” In short, while in beta status, ChatGPT connectors both increased access to internal data and lacked the safeguards and commitments you might expect.
What You Should Do:
- Before enabling a connector (or any other new feature) for an AI tool, confirm (1) whether it is in beta and (2) how the contract treats beta features.
- If beta services are excluded from coverage, consider (1) using a model that offers stronger protections for a similar feature or (2) delaying deployment until the feature is out of beta.
- If these aren’t options for you, seek to edit the language by requesting that certain baseline protections apply to connectors or ideally beta features more broadly.
Suggested sample language: OpenAI will use commercially reasonable efforts to maintain data confidentiality, security, and integrity in its Beta Services. OpenAI’s waiver of liability under Section 12.3(e) does not apply to breaches of Section 5 (Security and Privacy) or to any obligations under a mutually executed Data Processing Agreement.
Redefining “Input” to Capture Connector Data
At this point, we have a solid understanding of what to look for in an AI tool contract to determine how “Input” is handled, including who owns it and what AI providers can and cannot do with it. But connectors require us to revisit the definition of “Input.” OpenAI Business Terms define it as “Customer and Customer’s End Users input to the Services,” but it’s ambiguous whether content accessed via a connector, like a Google Sheet, counts as being “inputted” to the Services.
OpenAI’s marketing states that connected data isn’t used for model training, but that commitment isn’t reflected in the contract. And even taken at face value, the marketing language only excludes connector data from model training; it doesn’t go as far as the OpenAI Business Terms, which prohibit use of Customer Content to develop or improve the Services without consent.
What You Should Do:
- Clarify in the contract that data accessed via connectors counts as Input.
Suggested language: Customer and Customer’s End Users input to the Services, including data uploaded by Customer or retrieved from connected services via authorized integrations.
Third-Party Services and the Responsibility Gap
OpenAI Business Terms draw a common SaaS distinction: OpenAI is responsible for its own “Services,” and the third-party platforms available through the OpenAI Services are responsible for theirs. However, connectors, which facilitate data flow between the OpenAI Services and third-party platforms, may fall into a gray area. If an incident arises from the connection itself (e.g., data exposure due to inadequate authentication or insecure transmission), it is unclear who bears responsibility. The connector may not be expressly included in either party’s scope of responsibility, creating a potential accountability gap in the contract. While APIs have long posed similar ownership gaps, AI connectors heighten the risk by moving larger volumes of unstructured, sensitive data through less transparent processes.
What You Should Do:
- Clarify ownership and accountability of the connection layer. Request transparency, including documentation of how integrations work and what safeguards are in place.
- Push for limited accountability in the contract from the AI vendor if it enables the connector, especially around authentication, access controls, and data handling.
Key Technical Distinctions That Impact Risk Analysis
Understanding how a connector is configured is also critical to evaluating legal exposure. ChatGPT supports three main connector types:
- Non-Synced Connectors (Chat Search): These run real-time queries against your underlying data sources without persistently storing your data. Depending on your settings, team members may be able to enable the connector directly in their ChatGPT accounts without any specific legal or IT approval, or they may request that your workspace admin (who may not understand the legal and security implications of connectors) enable it.
- Synced Connectors: These connectors scan and index selected files or folders into a persistent knowledge base hosted by OpenAI (Azure), letting ChatGPT access the content across sessions without re-querying the original source. This introduces another data repository susceptible to unauthorized access, as well as retention, deletion, and cross-border transfer concerns.
- Custom Connectors (via Model Context Protocol): These are bespoke integrations created by your organization to connect ChatGPT with your internal systems (not already established by OpenAI) using defined endpoints and schemas. Because you control the integration, you also own the risk, including misconfigured permissions, lack of audit logging, and potential exposure of sensitive data. A minimal sketch of such an integration follows this list.
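To make the custom-connector risk concrete, here is a minimal sketch of what such an integration might look like, assuming the official Model Context Protocol Python SDK and its FastMCP interface. The server name, folder allowlist, and search_index helper are hypothetical placeholders; the point is that the permission checks and audit logging flagged above are yours to build, because the protocol does not supply them for you.

```python
# Minimal sketch of a custom MCP connector (assumes: pip install mcp).
# The allowlist and search_index() below are hypothetical placeholders.
import logging

from mcp.server.fastmcp import FastMCP

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("connector.audit")

mcp = FastMCP("internal-docs")  # hypothetical server name

# Assumed allowlist: only these folders are connector-approved.
ALLOWED_FOLDERS = {"public-policies", "approved-templates"}


@mcp.tool()
def search_docs(folder: str, query: str) -> str:
    """Search a connector-approved internal folder and return excerpts."""
    # Enforce the allowlist so the model can only reach vetted content.
    if folder not in ALLOWED_FOLDERS:
        audit_log.warning("denied: folder=%s query=%s", folder, query)
        return "Access denied: folder is not connector-approved."
    # Log every retrieval so access reviews have a trail to work from.
    audit_log.info("search: folder=%s query=%s", folder, query)
    return search_index(folder, query)


def search_index(folder: str, query: str) -> str:
    # Placeholder for your real retrieval layer (search index, database, etc.).
    return f"[results for {query!r} in {folder!r}]"


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport in the official SDK
```

Note that the allowlist and the logging live entirely in your code: if you omit them, nothing in ChatGPT or the protocol will add them back.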
Post-Contract Governance: Internal vs. External Risk
When contractual protections fall short (and even when they don’t!), internal policies and technical controls become essential. These can be organized around two primary risk categories:
| Risk | Description | Technical & Governance Recommendations |
| --- | --- | --- |
| Internal Data Leakage | Sensitive internal data may be improperly exposed to unauthorized employees due to misconfigured permissions in the underlying source system. Connectors inherit existing access rights: useful if correct, risky if not. | – Audit file and folder permissions in the source system before enabling connectors (see the sketch after this table) – Create “connector-safe” source-system directories with low-risk content – Use basic connectors, which allow you to select data more granularly from the source system – Restrict enablement of connectors to admins – Disable ChatGPT’s Memory – Monitor logs and perform regular access reviews of prompts and responses |
| External Data Leakage | Proprietary or confidential company, client, business partner, or employee data may be accessed without authorization. | – Confirm DPAs and customer contracts permit connector-based AI use – Use non-synced connectors – Use OpenAI’s Zero Data Retention APIs – Vet custom connectors for security, logging, and revocation – Spot-audit outputs for data misuse, hallucination, or bias |
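As one example of the permission-audit recommendation in the table, here is a minimal sketch assuming Google Drive as the source system and the official google-api-python-client library. FOLDER_ID and the credentials object are placeholders for your environment; the script flags files in a folder that are shared publicly or domain-wide before you point a connector at that folder.

```python
# Minimal sketch: flag broadly shared files before enabling a connector.
# Assumes: pip install google-api-python-client, plus OAuth or service-account
# credentials with read access to the folder (credential setup not shown).
from googleapiclient.discovery import build

FOLDER_ID = "your-folder-id"  # hypothetical: the folder a connector will index


def audit_folder_permissions(creds) -> None:
    service = build("drive", "v3", credentials=creds)
    page_token = None
    while True:
        resp = service.files().list(
            q=f"'{FOLDER_ID}' in parents and trashed = false",
            fields="nextPageToken, files(id, name, permissions)",
            pageToken=page_token,
        ).execute()
        for f in resp.get("files", []):
            # "anyone" = public link; "domain" = everyone in your org.
            # Note: for shared-drive items, permissions may need a separate
            # permissions().list() call rather than this inline field.
            for perm in f.get("permissions", []):
                if perm.get("type") in ("anyone", "domain"):
                    print(f"REVIEW: {f['name']} ({f['id']}) "
                          f"shared via {perm['type']} ({perm.get('role')})")
        page_token = resp.get("nextPageToken")
        if page_token is None:
            break
```

A report like this gives legal and security teams a concrete artifact to review before enablement, rather than discovering over-shared files through the connector itself.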
AI connectors offer powerful new functionality, but they also introduce new legal and operational complexity. Before enabling them, lawyers and contract professionals should confirm their production status, review and revisit key AI tool definitions, and map the full data lifecycle to ensure there is accountability at all touch points. Where contracts can’t be updated, cross-functional governance, especially from legal, security, and data teams, is critical to managing connector-related risk.
Don’t treat the contract as boilerplate. Build it to reflect how AI actually works.
For more on AI and Contracts, check out my full column here.
Disclaimer: This article references the production status, definitions, and contractual language associated with specific AI tools as of the time of writing. These features, and their associated legal terms, are subject to change by the vendors at any time. Before enabling connectors or relying on this guidance, readers should review the then-current product documentation and terms of service.