You cannot prompt your way out of a confidentiality breach

Why the governance race in legal AI may just be beginning

You cannot prompt your way out of a confidentiality breach. That reality is beginning to shape a new conversation in legal AI.

For the past few years, most of the industry's attention has focused on capability: models that can read contracts, summarise clauses and draft documents. The pace of improvement has been notable.

But as AI begins to move from experimentation into production environments, another question becomes unavoidable: governance.

Recent developments across the legal AI ecosystem suggest that trust will not be built through better instructions to AI systems, but through how access, workflows and decisions are constrained by the surrounding infrastructure.

We spoke with Precisely founder Nils-Erik Jansson about why the next phase of legal technology may depend less on what AI can do and more on how legal processes are structured and controlled.

Several major integrations were announced recently. What stood out to you?

Nils-Erik: The Harvey and Intapp integration. I've seen a lot of integrations announced. Most of them are about making the AI more capable: faster, broader, more accurate. This one was different. It wasn't about what the model can do. It was about what the system won't allow.

Ethical walls are decades old. Law firms have relied on them to manage conflicts. They are a compliance mechanism, not just a technical one. What this integration does is bring that same logic into the AI layer. The model doesn't just get access to information. The system decides what the model is allowed to see, based on rules that exist independently of the model.

That is a meaningful architectural shift. And it is the direction that enterprise AI in legal services needs to go. Tools that are capable but not governed are a liability. Tools that are governed at the infrastructure level are what organisations can actually trust with sensitive data.
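Neither Harvey nor Intapp has published the implementation, but the pattern Nils-Erik describes can be sketched in a few lines of Python. Every name below is hypothetical; what matters is where the decision is made. The ethical-wall rules are evaluated in infrastructure code, before any prompt is assembled, so nothing the user types can widen what the model sees:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Document:
    doc_id: str
    matter_id: str
    text: str

@dataclass
class EthicalWall:
    # A wall bars a user group from every document on certain matters.
    group: str
    blocked_matters: set = field(default_factory=set)

def permitted_documents(user_groups, documents, walls):
    """Apply the walls before retrieval, entirely outside the model."""
    blocked = set()
    for wall in walls:
        if wall.group in user_groups:
            blocked |= wall.blocked_matters
    return [d for d in documents if d.matter_id not in blocked]

def ask(llm, user_groups, question, documents, walls):
    # The access decision happens here, in the system. The prompt can only
    # ever contain documents the user was already entitled to see.
    context = permitted_documents(user_groups, documents, walls)
    prompt = "\n\n".join(d.text for d in context) + "\n\nQuestion: " + question
    return llm(prompt)
```

The structure, not the model, carries the guarantee: permitted_documents runs before the prompt exists, so a cleverly worded question cannot retrieve a blocked matter.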

For organisations evaluating how AI and data governance intersect in contract management specifically, see Why Data Sovereignty Matters for Contract Management and What It Means for AI and AI in CLM: Separating Value from Hype.

What does this tell us about the direction of travel for legal AI more broadly?

Nils-Erik: It tells us that the conversation is maturing. Early legal AI was mostly about capability. Can it read a contract? Can it summarise? Can it find the renewal clause? Those are real questions, and the answers have improved. But capability without governance is not something that legal and compliance teams can act on. You cannot deploy a tool in a regulated environment just because it is impressive. You need to be able to explain how it works, what it sees, and what safeguards exist.

The shift we are seeing now is towards infrastructure. Governance, residency, access control, audit trails. These are not features. They are preconditions. And the vendors who understand that are building accordingly. The vendors who don't are going to run into walls as they move upmarket.

For Precisely's own approach to responsible AI in contracting, see Smarter Contracting with AI: Inside Precisely's Approach to Responsible Innovation.


You may be wondering...

Can AI tools cause confidentiality breaches in legal work?
Yes. If AI tools have unrestricted access to contract data, they may process or expose confidential information in ways that breach obligations. The risk is not the AI itself, but the absence of structural controls over what data the model is permitted to access.
Why is AI governance in legal technology an architectural issue?
AI governance cannot be solved through prompting or user instructions alone. It requires the underlying system to enforce access rules independently of the model. If the architecture permits unrestricted data access, no prompt-level instruction can reliably prevent a confidentiality breach.
What are ethical walls in the context of AI and legal technology?
Ethical walls are access controls that prevent certain information from being shared with specific individuals or systems. Applied to AI, they ensure the model only accesses data it is permitted to see, regardless of what a user might prompt it to retrieve.
How should organisations evaluate AI governance in CLM platforms?
Organisations should ask whether AI features operate within the same access controls as the rest of the platform, whether data sent to AI models is limited to what is necessary for the task, and whether users can opt out of AI processing for sensitive contracts.
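As a purely illustrative sketch, those three questions map onto a per-contract policy object. The Python below is not any vendor's schema; the field names are invented to make each evaluation question concrete:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIProcessingPolicy:
    inherit_platform_acl: bool = True               # question 1: same access controls as the platform
    fields_sent_to_model: tuple = ("clause_text",)  # question 2: only what the task requires
    ai_opt_out: bool = False                        # question 3: owner can exclude sensitive contracts

def payload_for_model(contract, user_readable_ids, policy):
    """Return the minimal payload the model may see, or None if processing is barred."""
    if policy.ai_opt_out:
        return None                                 # opted out: the model never sees this contract
    if policy.inherit_platform_acl and contract["id"] not in user_readable_ids:
        return None                                 # AI access cannot exceed the user's own access
    return {k: v for k, v in contract.items() if k in policy.fields_sent_to_model}
```

If a platform cannot express something equivalent to each of these fields, the questions above have no enforceable answer.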
If you have any further questions or just want to reach our team, click the button below.
Contact us