You cannot prompt your way out of a confidentiality breach

Why the governance race in legal AI may just be beginning

You cannot prompt your way out of a confidentiality breach. That reality is beginning to shape a new conversation in legal AI.

For the past few years, most of the industry's attention has focused on capability: models that can read contracts, summarise clauses and draft documents. The pace of improvement has been notable.

But as AI begins to move from experimentation into production environments, another question becomes unavoidable: governance.

Recent developments across the legal AI ecosystem suggest that trust will not be built through better instructions to AI systems, but through how access, workflows and decisions are constrained by the surrounding infrastructure.

We spoke with Precisely founder Nils-Erik Jansson about why the next phase of legal technology may depend less on what AI can do and more on how legal processes are structured and controlled.

Several major integrations were announced recently. What stood out to you?

Nils-Erik: The Harvey and Intapp integration. I've seen a lot of integrations announced. Most of them are about making the AI more capable: faster, broader, more accurate. This one was different. It wasn't about what the model can do. It was about what the system won't allow.

Ethical walls are decades old. Law firms have relied on them to manage conflicts. They are a compliance mechanism, not just a technical one. What this integration does is bring that same logic into the AI layer. The model doesn't just get access to information. The system decides what the model is allowed to see, based on rules that exist independently of the model.

That is a meaningful architectural shift. And it is the direction that enterprise AI in legal services needs to go. Tools that are capable but not governed are a liability. Tools that are governed at the infrastructure level are what organisations can actually trust with sensitive data.
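To make the distinction concrete, here is a minimal sketch of what "governed at the infrastructure level" can mean in practice. All names here (Document, EthicalWallPolicy, retrieve_for_model) are hypothetical illustrations, not Harvey's or Intapp's actual API: the point is only that the governance layer filters documents before anything reaches the model, so no prompt can talk its way past the wall.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Document:
    doc_id: str
    matter: str  # the legal matter this document belongs to

@dataclass
class EthicalWallPolicy:
    """Access rules that exist independently of the model:
    each user is barred from a set of matters."""
    barred_matters: dict = field(default_factory=dict)  # user -> set of matters

    def allowed(self, user: str, doc: Document) -> bool:
        return doc.matter not in self.barred_matters.get(user, set())

def retrieve_for_model(user: str, query: str,
                       corpus: list, policy: EthicalWallPolicy) -> list:
    """The governance layer filters BEFORE retrieval or prompting.
    Documents excluded by the wall never enter the model's context,
    so no instruction to the model can surface them."""
    visible = [d for d in corpus if policy.allowed(user, d)]
    # (ranking/retrieval over `visible` would happen here)
    return visible

corpus = [Document("d1", "matter-A"), Document("d2", "matter-B")]
policy = EthicalWallPolicy({"alice": {"matter-B"}})
docs = retrieve_for_model("alice", "renewal clause", corpus, policy)
print([d.doc_id for d in docs])  # -> ['d1']
```

The design choice is the whole argument: the policy object lives outside the model, so the constraint holds regardless of how the model is prompted.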

For organisations evaluating how AI and data governance intersect in contract management specifically, see Why Data Sovereignty Matters for Contract Management and What It Means for AI and AI in CLM: Separating Value from Hype.

What does this tell us about the direction of travel for legal AI more broadly?

Nils-Erik: It tells us that the conversation is maturing. Early legal AI was mostly about capability. Can it read a contract? Can it summarise? Can it find the renewal clause? Those are real questions, and the answers have improved. But capability without governance is not something that legal and compliance teams can act on. You cannot deploy a tool in a regulated environment just because it is impressive. You need to be able to explain how it works, what it sees, and what safeguards exist.

The shift we are seeing now is towards infrastructure. Governance, residency, access control, audit trails. These are not features. They are preconditions. And the vendors who understand that are building accordingly. The vendors who don't are going to run into walls as they move upmarket. For Precisely's own approach to responsible AI in contracting, see Smarter Contracting with AI: Inside Precisely's Approach to Responsible Innovation.
