Why AI on top of chaos is still chaos

Most legal and operations teams are not ignoring AI. They are using it. Copilot summarises documents, ChatGPT drafts first versions, and a growing number of tools promise to surface risk from contracts in seconds. The technology is genuinely useful, and the teams adopting it are making a reasonable decision.

The problem is not the AI. The problem is what the AI is working with.

And to be clear about what that means in practice: if you had no CLM before, you still have no CLM. AI does not give you control over who can commit the company to a contract. It does not give you a renewal calendar you can trust. It does not enforce approval workflows, standardise your templates, or tell you what obligations you are currently on the hook for. It just lets you do all of that, wrong, faster.

The invisible gap in every AI-first contract workflow

When a company runs AI on top of a shared drive, an email inbox, and a collection of Word documents, it gets faster access to the same fragmented information it had before. The AI can read the document. It cannot tell you which version is current, whether the approval process was followed, or what obligations are now overdue across your portfolio.

Missed auto-renewals, unchecked liability caps, and unmet compliance requirements do not disappear because you can draft agreements more quickly. These are structural contract management challenges, not speed problems. And structural problems require structural solutions.

According to the World Commerce and Contracting Association, poor contract management costs organisations an average of 9 percent of annual revenue. That figure does not decrease when AI is added to a disorganised repository. It decreases when the underlying data model is fixed.

AI is only as good as the structure underneath it

Think of it this way. A powerful search engine works because the web is indexed. The same technology applied to an unindexed archive returns noise. Contract AI is no different. Natural language queries, risk flags, and obligation summaries all depend on structured, validated metadata to produce reliable results.

If your contracts live in folders without consistent tagging, if approval history exists only in email threads, and if there is no single source of truth for what is signed and active, then your AI tools are pattern-matching against incomplete data. The output looks plausible. It is not trustworthy.

This is not a criticism of AI. It is a description of the prerequisite. As we have written previously, you cannot fix data quality downstream, and contract data is no exception.

What CLM actually provides

A CLM platform solves a different category of problem than AI does, and that category is the one that keeps legal and ops teams up at night.

When contracts are created through structured workflows with enforced templates and approval logic, every agreement arrives in the archive with clean, consistent metadata. When that metadata is validated at the point of entry, reporting on obligations, renewal dates, and financial exposure becomes reliable rather than approximate. When permissions and audit trails are embedded in the process, you can demonstrate compliance rather than reconstruct it. These are the foundations of sound contract management best practices.

This structured layer is what gives AI something real to work with. Natural language filters, automated risk flags, and obligation tracking all perform significantly better when the underlying data has been governed from the start. It also raises broader questions about where that data lives and who controls it, something we cover in depth in why data sovereignty matters for contract management.

The decision is not either/or

No one is suggesting that legal teams choose between AI and CLM. The point is sequencing. Deploying AI on top of unstructured contracts is understandable. It is also a way of deferring the harder problem. We have written separately about how to separate genuine AI value from hype in CLM, and the distinction matters here too.

The organisations that will get the most from AI in their contract processes are those that invest first in the data infrastructure underneath it. That means consistent contract templates, governed workflows, structured metadata, and a searchable archive where every document has a clear status and owner. Ultimately, a contract is not a document but a strategic asset, and treating it as one is what makes AI genuinely useful rather than superficially impressive.

Once that foundation exists, AI tools can do what they are genuinely good at: surfacing patterns, accelerating review, and reducing the manual burden on legal teams who are already stretched thin. For a closer look at how Precisely approaches this, see our thinking on responsible AI innovation in contracting.

The question worth asking is not whether your AI tool is powerful enough. It is whether the contracts underneath it are structured enough for the AI to be trusted.


Precisely is a European CLM platform that helps organisations govern contracts from creation to archive. If you are evaluating how to improve your contract infrastructure, we are happy to show you how it works in practice.

You may be wondering...

Does AI replace the need for a CLM platform?
No. AI and CLM solve different problems. AI accelerates tasks like drafting and review, but it cannot enforce approval workflows, govern contract creation, or produce reliable obligation data on its own. CLM provides the structured foundation that makes AI output trustworthy.
What is the right order: CLM first or AI first?
CLM first. Deploying AI on top of unstructured contracts defers the underlying problem. Organisations that invest in governed workflows, consistent templates, and structured metadata first get significantly more value from AI tools once they are introduced.
Why does contract data quality matter for AI tools?
AI tools are only as reliable as the data they work with. If contracts are stored without consistent metadata, version control, or audit trails, AI queries return incomplete or misleading results. Structured, governed contract data is a prerequisite for accurate AI output.
What does a CLM platform actually do that AI cannot?
A CLM platform controls who can create and commit to contracts, enforces approval logic, maintains a validated archive with clean metadata, and provides reliable renewal and obligation tracking. These are governance functions that AI tools are not designed to perform.
If you have any further questions or just want to reach our team, click the button below.
Contact us