Why Depth, Not Access to Models, Will Define Legal Tech’s Future
When everyone has access to the same AI models, surface-level differentiation disappears. The future of legal tech belongs to depth: not wrappers or workflows, but systems built to understand contracts, decisions and institutional context across the full lifecycle.
By: Eleanor Lightbody, CEO
As AI becomes universal, value in legal-specific artificial intelligence shifts from managing documents and automating processes to understanding context, judgement, and decisions across the contract lifecycle.
This moment exposes the limits of contract lifecycle management (CLM) systems as we know them and the fragility of wrapper-based legal AI.
Depth wins
General-purpose AI has commoditised task automation and rendered point solutions obsolete. Summarisation, clause extraction, drafting, and review are no longer scarce capabilities. When everyone has access to the same underlying intelligence, surface-level differentiation erodes quickly.
What survives is depth: systems that understand contracts, workflows and institutional context end to end.
For years, legal technology has been built around managing documents and processes. Although AI doesn’t make those things disappear, it radically changes where value is created. Managing contracts is no longer enough: AI needs to understand them.
Why CLM breaks in an AI-first world
Traditional CLMs were designed around storage and workflows. They assume contracts are artifacts to be routed and archived.
That premise worked when the primary challenge was coordination, but it falls short when the primary challenge is judgement.
Contracts are not static records. They encode negotiated positions, accepted risks, commercial trade-offs, and regulatory constraints. They reflect how an organisation actually operates – not just what it agreed to on paper, but why.
CLMs struggle here because they were never designed to reason about decisions or retain the context behind them. They manage lifecycle stages, not institutional knowledge. As a result, critical understanding is repeatedly lost and reconstructed across teams and deals over time.
In an AI-first world, that limitation becomes structural. A system built to manage documents cannot easily become a system built to understand decisions.
Why AI wrappers collapse at scale
The recent wave of AI tools layered on top of general-purpose models has delivered real gains. Legal teams can move faster, surface issues earlier, and reduce manual effort.
But these tools remain episodic.
Each interaction is a moment in time. The user must reintroduce context and re-explain standards. Oversight has to be constant to ensure consistency and trust. Essentially, the system doesn't remember; the user does.
Although this approach can be useful for individual tasks, it breaks down at enterprise scale.
As general-purpose AI becomes more capable and accessible, the marginal value of simply adding an AI interface to existing tools declines quickly. Wrapping intelligence around document management or workflow software creates convenience but not durability.
Wrappers don’t fail because the models are weak. They fail because they lack depth.
What depth actually means in legal AI
Depth is not about using a particular model, adding more features, or exposing smarter prompts.
Depth means building a system that understands, remembers, and acts across the full contract lifecycle. Depth is structural. It is built into the system.
In legal AI, depth means:
- Proprietary data and feedback loops: learning from real contracts, negotiations, and outcomes over time, not just analysing documents in isolation
- Persistent organisational memory: retaining how decisions were made, which positions were accepted, and why, across thousands of agreements
- Integrations and distribution: operating inside the tools and workflows legal teams already rely on, so intelligence is applied where work actually happens
- Governance and auditability by design: with transparency, traceability, and controls that regulated teams require
- Measurable outcomes: demonstrable improvements in cycle time, risk exposure, consistency, and cost, not just better answers
This kind of depth cannot be configured through a plugin or bolted onto a document system. It requires an intelligence layer designed to accumulate understanding, apply judgement consistently, and improve with every interaction. It emerges from sustained investment in legal expertise and specialised data.
What defines the next generation of legal AI
The future of legal AI will not be decided by who has access to the best model. That advantage will always be temporary.
It will be decided by who has built the deepest understanding of how contracts actually work in practice – how they are negotiated, amended, relied upon, and analysed over time – and who can translate that understanding into reliable outcomes at scale.
In this game, there will be winners and losers. This will not kill off the industry; this is not the end of legal AI.
It is the end of CLM as a sufficient solution.
It is the end of wrapper-based AI as a durable strategy.