Legal AI adoption has accelerated rapidly over the past two years, yet many law firms are discovering that progress stalls when pilots move into production. In a recent Talking Tech webinar hosted by Legal IT Insider, Errol Rodericks, marketing director at Silicon Valley data management company Denodo, unpacked why legal AI initiatives so often fail — and why the problem is rarely the technology itself.

The core issue, Rodericks argued, is trust. While AI tools perform well in controlled pilots, they struggle in live environments where they must operate across fragmented, siloed and inconsistently governed data. Law firms may have sophisticated CRM, billing, document management and analytics systems, but these tools were designed to support individual processes, not to provide a consistent, governed enterprise-wide view of data. As a result, lawyers are forced into manual reconciliation, exporting spreadsheets and relying on yesterday’s truth — all of which undermine confidence in AI outputs.

This lack of trust is particularly acute in the legal sector. Unlike many other industries, law firms must be able to stand behind every recommendation not only at the point of decision-making, but months or years later during audits, disputes or regulatory scrutiny. If lawyers cannot trust the inputs, they simply cannot rely on the outputs. As Rodericks put it during the webinar, legal AI does not fail because models are weak — it fails because the data beneath them is not trusted.

The webinar also explored where firms feel the operational pain most sharply. Preparing pitches and panel submissions was a recurring example: client billing history, prior matter experience, regulatory insights and sector benchmarks often sit in disconnected systems. Firms lose time, responsiveness and opportunities as teams spend hours assembling data rather than advising clients.

A public-domain case study with BCLP illustrates what becomes possible when those barriers are removed; the webinar also revisited takeaways from a podcast with BCLP’s senior data architecture manager, Ben Legge.

Looking ahead, the discussion highlighted that emerging approaches such as agentic AI and the Model Context Protocol (MCP) will only increase the pressure on data foundations. MCP can connect AI agents to different data sources, but it cannot act as a filter for data quality.

The clear takeaway for firms: if AI is to deliver real value, trusted data must be treated as a strategic asset — not an afterthought.

👉 Watch the full webinar replay below to hear the discussion in depth, including practical examples and lessons from both legal and financial services.
