The UK Legal IT Innovators Group (Litig) today (22 October) announced the launch of an AI Transparency Charter, under which legal organisations will commit to ensuring that products and services that use generative AI are developed in a safe, ethical, and transparent way.
The Charter is supported by a Transparency Statement – a standardised template, inspired by Google’s AI “model cards,” under which providers of legal AI tools will set out details of their technology, use cases, data, testing methods, and ethical safeguards at the product or service level.
The Charter and Transparency Statement were designed by a Litig working group consisting of law firms and both large and small legal tech suppliers. Feedback was gathered from across the Litig membership and the wider Litig AI benchmarking community, comprising about 300 organisations including law firms, legal tech suppliers, universities and regulators. The Charter forms part of the Litig AI Benchmark Initiative, started in July 2024 with participation from a broad community across the legal industry.
The Charter notably steers clear of asking signatories to disclose commercially or legally sensitive information, a requirement that could deter participation.
Litig is inviting AI vendors to sign up to the Transparency Charter here. The goal is to establish an industry-wide benchmark for AI trust and accountability, enabling firms to embrace innovation while safeguarding ethical and professional standards.

John Craske, chief knowledge and innovation officer at CMS, who founded the benchmarking initiative, told Legal IT Insider that he expects good participation, commenting: “All of the organisations on the working group are keen to get their organisation to sign up, but of course they need to work through that.”
Voluntary charters inevitably live and die on the extent to which they receive backing, and Craske said: “We do need a critical mass to make it a success – and that’s where the community comes in. If you are a legal tech supplier, or a law firm providing legal AI products to the market, please sign up!”
In terms of how Litig will monitor compliance with the Charter, Craske said: “It’s something that the Working Group have considered and will keep under review. At this stage, we don’t have the resource to monitor or audit compliance, so the Charter is a voluntary code of conduct. We will of course respond if we are made aware of anything that goes against the Charter. We are hoping for widespread take-up which would make this a good problem to have!”
Still, the transparency requirements may raise the question of whether the Charter does just that: how can organisations be transparent without sharing competitive data? Craske told us: “This was a key point the working group discussed. The Transparency Charter is a set of behaviours to commit to and doesn’t require any confidential information to be disclosed. The Transparency Statement is where the rubber hits the road and our hope is that over time legal tech suppliers will get more comfortable with being open about this information and publish them, but – at the least – they can provide it to prospective customers as part of a sale process.”
As things currently stand, the Charter will come down to organisations doing the right thing for the common good. Craske said: “It’s very much a question of trust and understanding. Transparency is the key foundation of trust. We can’t build sustainable adoption and change without transparency and trust.”
The Charter’s core commitments include:
- Transparency – clear, open communication on how AI is used in relevant legal services and products.
- Accuracy & Testing – evidence-backed claims on performance, supported by testing data and methods.
- Bias & Ethics – proactive measures to identify, address, and mitigate risks such as bias.
- Use Cases & Limitations – honest disclosure of where AI works well, and where it should not be relied upon.
- Environmental Impact – commitments to track and reduce the carbon and resource footprint of AI.
- Regulation & Standards – alignment with industry standards and compliance with the EU AI Act and other frameworks.
The Charter is also accompanied by Litig AI use case frameworks – practical templates to help firms, suppliers and other organisations evaluate and define AI use cases, which you can access here: https://www.litig.org/ai/introduction. There is also a glossary of AI terminology, including testing and benchmarking terms, along with information about other benchmarks, evaluations, due diligence questions and AI regulation.
Together, these resources create a comprehensive foundation for legal professionals to evaluate, adopt, and govern AI responsibly.

“To drive sustainable and responsible adoption of AI, the legal industry must have confidence and trust in the tools they are using. The Litig AI Transparency Charter and supporting resources provide practical, workable building blocks that firms and AI providers can use to build trust and confidence and ensure that AI is used responsibly, without compromising standards expected by law firms, clients, and society,” said David Wood, Litig director and head of portfolio management at Simmons Wavelength.
The post Litig unveils AI Transparency Charter to promote responsible AI adoption in the legal sector appeared first on Legal IT Insider.