Key Takeaways:

  • A global legal AI benchmarking study comparing human lawyers and AI assistants on real-world contract drafting tasks found that AI assistants outperformed human lawyers in preparing reliable first contract drafts.
  • Even the best AI tools made errors in more than 1 out of 4 drafts. No human or AI tool was completely error-free.
  • Lawyers can reduce AI errors and make AI a reliable collaborator by applying four practical techniques grounded in the benchmark’s findings.

4 Ways to Improve AI Contract Drafting Results

By Anna Guo

AI is transforming how lawyers draft contracts, but it’s not a magic solution. While recent benchmarking shows AI can outperform humans on first drafts, even the best tools make mistakes in more than one out of four attempts. The key is learning how to work with AI strategically: harnessing its speed and capabilities while building in safeguards that catch errors before they become costly.

Benchmarking AI vs. Human Drafting Skills

LegalBenchmarks.ai recently published its second report, Benchmarking Humans & AI in Contract Drafting, comparing 13 AI tools and human lawyers across 30 real-world drafting tasks. Outputs were assessed on three dimensions: reliability, usefulness, and workflow integration.

Some results may be surprising:

  • Neither human lawyers nor AI tools produced completely error-free drafts.
  • The challenge for lawyers isn’t deciding whether to use AI, but deciding how to use it without letting errors slip through.
  • There are several techniques that can drastically improve the accuracy and usefulness of the AI drafting output.

But even the strongest AI tools still make mistakes. For many lawyers, especially those working outside their core practice area or junior lawyers still building judgment, that error rate is risky.

The solution isn’t to avoid AI, but to learn how to manage the risks so lawyers can capture the benefits of AI-assisted contract drafting while minimizing potential errors.

Based on the benchmark findings, here are four practical ways for lawyers to de-risk AI-assisted drafting before sending a draft out.

1. Identify Hidden Assumptions to Reduce Ambiguities

Key finding: When instructions (aka inputs) were ambiguous, AI tools sometimes produced outputs misaligned with the tasks.

Why it matters: AI tools don’t usually ask follow-up questions if your instructions are unclear. Instead, they tend to fill the gaps with their own assumptions. The resulting draft may look polished and authoritative, but it can quietly reflect the wrong position.

For instance, if you ask for a “standard indemnity clause,” one AI tool might default to a clause that is strongly pro-supplier, while another might lean pro-customer. Both versions look legally sound, but if your intention was to have a balanced draft, neither delivers what you needed.

Risk mitigation: Write clearer prompts. Ask the AI tool to restate your instructions and list the assumptions it is making before drafting. This simple step forces ambiguity into the open, so you can correct it before the tool starts drafting in the wrong direction.
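
To make this concrete, here is a minimal sketch of such a preflight step in Python, assuming access to the OpenAI Python SDK. The model name and prompt wording are illustrative choices, not taken from the benchmark.

```python
# A minimal sketch of a "surface assumptions first" preflight, assuming the
# OpenAI Python SDK; the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PREFLIGHT_PROMPT = """Before drafting anything, do two things:
1. Restate my instructions in your own words.
2. List every assumption you are making (governing law, favored party,
   risk allocation, defined terms) that my instructions do not settle.
Do NOT draft the clause yet.

My instructions: {instructions}"""

def surface_assumptions(instructions: str) -> str:
    """Ask the model to expose its assumptions before it starts drafting."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model your tool exposes
        messages=[{"role": "user",
                   "content": PREFLIGHT_PROMPT.format(instructions=instructions)}],
    )
    return response.choices[0].message.content

print(surface_assumptions("Draft a standard indemnity clause for a SaaS agreement."))
```

The design point is ordering: the model must show its reading of your instructions before it is allowed to draft, which is exactly when corrections are cheapest.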

2. Force Transparency to Prevent “Quiet Drift”

Key finding: An AI tool instructed to draft a pro-licensee clause quietly inserted pro-licensor terms. Legal task reviewers noted this would be hard for non-experts to spot because the clause was otherwise well drafted.

Why it matters: AI-generated drafts often read as fluent and authoritative, but that fluency can mask subtle shifts in meaning. A clause that looks correct on the surface may actually tilt leverage in the counterparty’s favor. These are not obvious drafting errors; they’re slight alterations in balance that can significantly change the commercial outcome. Without transparency, these shifts may only surface later in negotiations, when they’re much more costly to correct.

Risk mitigation: Draft with negotiation in mind. Instruct the AI tool to include short notes alongside each clause explaining (1) how it supports your position, (2) what counterparty terms were left out, and (3) where possible, references to legal or market practice that justify the choice. This makes the draft less of a black box and gives you a clearer basis for review.
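
As one way to operationalize this, the sketch below bakes the three-part annotation into the drafting prompt itself. It assumes the same OpenAI Python SDK setup as the earlier sketch; the model name, the [DRAFTING NOTES] label, and the prompt wording are all illustrative.

```python
# Sketch of a drafting prompt that forces per-clause annotations; assumes the
# OpenAI Python SDK, with an illustrative model name and annotation format.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANNOTATED_DRAFT_PROMPT = """Draft a {clause_type} clause favoring {party}.
After each clause, append a section labeled [DRAFTING NOTES] stating:
(1) how the clause supports {party}'s position,
(2) which counterparty-favorable terms were deliberately left out, and
(3) where possible, a legal or market-practice reference justifying the choice."""

def annotated_draft(clause_type: str, party: str) -> str:
    """Request a draft that explains itself clause by clause."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user",
                   "content": ANNOTATED_DRAFT_PROMPT.format(
                       clause_type=clause_type, party=party)}],
    )
    return response.choices[0].message.content

print(annotated_draft("license grant", "the licensee"))
```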

3. Validate with Second Opinions

Key finding: In our survey, 83% of lawyers who use AI tools rely on two or more tools, not just one. With the best-performing AI tool scoring 73.3% on reliability, it’s not surprising that lawyers don’t trust a single tool to carry the whole weight.

Why it matters: Contract drafting involves judgment calls, not just mechanical wording. If two tools independently land on similar language, it’s a clue that the clause reflects a common market approach, but it still needs to be checked against your intended position. If their outputs differ significantly, that’s a red flag and the issue may require closer scrutiny. Divergence isn’t a failure; it’s a useful signal that highlights where human legal judgment is most needed.

Risk mitigation: Get a second opinion. Run the same drafting task through a second AI tool, ideally one that’s built on a different foundational model (e.g., Gemini vs. GPT). Compare the outputs side-by-side. Where they converge, you’ve likely found a safer baseline. Where they diverge, that’s your cue to probe deeper.
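
Once you have both drafts as plain text, even a simple diff makes convergence and divergence easy to see. The sketch below uses only Python’s standard-library difflib; the sample clauses are invented for illustration, and how you export each tool’s draft will depend on the tool.

```python
# Side-by-side comparison of two tools' drafts using only the standard library.
import difflib

def compare_drafts(draft_a: str, draft_b: str) -> str:
    """Return a unified diff so divergences between the two drafts stand out."""
    diff = difflib.unified_diff(
        draft_a.splitlines(),
        draft_b.splitlines(),
        fromfile="tool_a_draft",
        tofile="tool_b_draft",
        lineterm="",
    )
    return "\n".join(diff)

# Invented sample clauses: identical language drops out of the diff entirely,
# so whatever remains is a divergence worth a human lawyer's closer look.
draft_a = "The Supplier shall indemnify the Customer against all claims."
draft_b = "The Supplier shall indemnify the Customer against third-party claims."
print(compare_drafts(draft_a, draft_b))
```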

4. Request Proactive Risk Analysis

Key finding: Every lawyer we surveyed expected AI outputs to flag material drafting risks if any existed. Yet in practice, only some AI tools flagged such risks. Human lawyers, by contrast, flagged none.

Why it matters: Drafting risks, such as unenforceability or illegality, are exactly the issues lawyers are expected to catch, yet the benchmark showed they often go unflagged. Treat the AI tool as your first-pass risk analyst, not just a drafter.

Risk mitigation: Be explicit. Ask the AI: “Flag the top three risks in this draft that could (a) make it unenforceable, (b) create compliance issues, or (c) expose [the company] to liability. Explain why each matters and cite relevant sources (e.g., case law, statutes, or industry guidelines) where possible.” This shifts the AI tool from being just a drafter to acting as an early-stage risk reviewer. A lawyer still needs to validate the analysis, but it provides a running start.
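
Here is a minimal sketch of that risk pass as a reusable step, again assuming the OpenAI Python SDK; the model name is illustrative and the prompt mirrors the wording suggested above.

```python
# Sketch of a post-drafting risk pass; assumes the OpenAI Python SDK and an
# illustrative model name. The prompt follows the wording suggested above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RISK_PROMPT = """Flag the top three risks in this draft that could
(a) make it unenforceable, (b) create compliance issues, or
(c) expose the company to liability. Explain why each matters and cite
relevant sources (e.g., case law, statutes, or industry guidelines) where possible.

Draft:
{draft}"""

def risk_review(draft: str) -> str:
    """Ask the model to act as an early-stage risk reviewer on a finished draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": RISK_PROMPT.format(draft=draft)}],
    )
    return response.choices[0].message.content
```

A lawyer still validates whatever comes back; the code only standardizes the asking.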

The challenge for lawyers isn’t deciding whether to use AI, but deciding how to use it without letting errors slip through. These four techniques give you practical guardrails: surfacing hidden assumptions, exposing quiet shifts in meaning, cross-checking with multiple tools, and pushing the AI to flag risks. Together, they turn AI from a black-box drafter into a useful drafting collaborator: one that speeds you up without leaving blind spots behind.

For a deeper dive into how humans and AI performed, see Benchmarking Humans & AI in Contract Drafting.
