Several major integrations were announced recently. What stood out to you?
Nils-Erik:
The Harvey and Intapp integration.
I’ve seen a lot of integrations announced. Most of them are about making the AI more capable, faster, broader, more accurate. This one was different. It wasn’t about what the model can do. It was about what the system won’t allow.
Ethical walls are decades old. Law firms have had conflict policies forever. What’s interesting here is where the control actually sits. Intapp’s policies flow into the system and determine what the model can see before any question gets asked. The model doesn’t get instructed to respect the wall. It just can’t see through it.
That’s a different kind of answer to a governance problem.
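To make "it just can't see through it" concrete, here is a minimal sketch of ethical-wall enforcement at the retrieval layer. Every name in it is hypothetical; this is not Harvey's or Intapp's actual API, just the shape of the structural approach: walled matters are filtered out of the candidate set before the model is ever called, so there is no instruction for it to obey or ignore.

```python
# Hypothetical sketch: the ethical wall is applied before retrieval,
# so walled documents never enter the model's context at all.
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    matter_id: str
    text: str

@dataclass(frozen=True)
class EthicalWall:
    user_id: str
    blocked_matter_ids: frozenset  # matters this user must not see

def retrieve(query: str, wall: EthicalWall, corpus: list[Document]) -> list[Document]:
    """Filter the corpus BEFORE search, not in the prompt.

    Whatever the query or system prompt says, walled matters are
    simply absent from the candidate set the model can draw on.
    """
    visible = [d for d in corpus if d.matter_id not in wall.blocked_matter_ids]
    return [d for d in visible if query.lower() in d.text.lower()]  # stand-in for real search

corpus = [
    Document("d1", "matter-A", "Indemnity terms for client A."),
    Document("d2", "matter-B", "Indemnity terms for client B."),
]
wall = EthicalWall(user_id="u1", blocked_matter_ids=frozenset({"matter-A"}))
print(retrieve("indemnity", wall, corpus))  # matter-A can never appear here
```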
Why is that distinction important?
Nils-Erik:
Because instructions are suggestions. Good ones, but still suggestions.
If you tell a model “do not access matter A while working on matter B,” you’re relying on the model to honour that. And mostly it will. Until something in the context or the prompt pushes it somewhere it shouldn’t go. There’s no structural reason it can’t. The constraint is a sentence, not a wall.
A confidentiality breach in a legal context isn’t a minor UX failure. It can end client relationships, trigger regulatory investigations, expose firms to liability. You can’t walk that back with a better system prompt in the next release.
What changed with the Intapp integration is that the question of access is settled before the model is involved. That’s a meaningful shift. And honestly, it’s a more honest answer to the problem than most of what I’ve seen.
Has legal technology seen this kind of shift before?
Nils-Erik:
Yes, and it took longer than it should have.
Fifteen years ago, governance in contract work lived in Word templates and style guides. There were policies about which clauses to use, how to structure terms, what to do above certain thresholds. And they worked, until someone had a deadline and edited the wrong section and sent it anyway. Because the instruction was right there in the document. So was the delete key.
CLM moved the governance out of the document and into the system. Instead of telling users what not to change, you removed the ability to change it. Users answered questions, and the system built the contract. The approved language wasn’t somewhere they could accidentally overwrite. It wasn’t a field at all.
That’s what we were trying to do at Precisely. The constraint isn’t a feature. It’s the design principle. You don’t add it after you’ve built everything else.
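A toy sketch of that CLM pattern, with illustrative names rather than Precisely's actual implementation: approved language lives as a constant in the system, users supply only structured and validated answers, and the clause wording is never exposed as an editable field.

```python
# Toy sketch: approved language is a constant in the system, not an
# editable field. The only user input is a bounded, validated value.
APPROVED_LIABILITY_CLAUSE = (
    "Liability is capped at {cap_percent}% of fees paid in the twelve "
    "months preceding the claim."
)

def build_liability_section(answers: dict) -> str:
    cap = int(answers["cap_percent"])
    if not 50 <= cap <= 100:
        raise ValueError("cap_percent outside approved range")  # stop, don't warn
    # The clause text itself is never exposed for editing anywhere.
    return APPROVED_LIABILITY_CLAUSE.format(cap_percent=cap)

print(build_liability_section({"cap_percent": "100"}))
```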
“Governance stopped being a policy and became a constraint.”
If governance becomes architectural rather than policy-based, what does that actually change for how legal systems are designed?
Nils-Erik:
It changes what you ask first.
The default design question is usually some version of “what do we want users to be able to do?” You build that, then add rules around the edges about what they shouldn’t do. That’s a natural way to design products. It’s also how you end up with a blank text field next to an approved clause and a note in the UI saying “please don’t edit this.”
The question I kept coming back to at Precisely was the other one: what should be impossible? What is it that, if someone could do it, would undermine the whole point of the system? Answer that first and the design follows. Fixed outputs. Structured inputs. No escape hatch.
For legal AI the questions translate fairly directly. Which data should the system structurally not be able to retrieve during a given matter? What actions require a human decision before the workflow continues? Where does the system stop entirely rather than issue a warning that someone will probably dismiss?
These are architecture questions, not configuration questions. And the time to answer them is before you build, not after something goes wrong in production.
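One of those architecture questions, the hard stop, might look something like this in skeleton form. The names are illustrative, not a specific product's API; the point is that a gate with no recorded human decision raises an error rather than showing a warning someone can dismiss.

```python
# Sketch of a hard gate: the workflow halts until a recorded human
# decision exists. There is no "proceed anyway" path to click through.
from datetime import datetime, timezone

class ApprovalRequired(Exception):
    """Raised when a step reaches a gate with no human decision on file."""

def advance_workflow(step: str, approvals: dict[str, dict]) -> None:
    decision = approvals.get(step)
    if decision is None:
        # Structural stop: the caller cannot continue past this point.
        raise ApprovalRequired(f"step '{step}' requires a human decision")
    # An approval is a logged fact, not a dismissed dialog.
    print(f"{step}: approved by {decision['user']} at {decision['at']}")

approvals = {"send_to_counterparty": {"user": "gc@example.com",
                                      "at": datetime.now(timezone.utc).isoformat()}}
advance_workflow("send_to_counterparty", approvals)   # proceeds
# advance_workflow("file_with_court", approvals)      # raises ApprovalRequired
```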
The comparison with CLM is interesting. Do you think legal AI will follow a similar path?
Nils-Erik:
Probably, but the external pressure is completely different.
CLM had no forcing function. The shift happened because a handful of legal teams cared enough to insist on it, slowly, over years. Get the governance wrong and you had frustrated clients and slow adoption. That was it. Nobody was coming after you.
With legal AI, courts are already sanctioning attorneys for AI errors. Regulatory deadlines are arriving in months. In-house counsel are personally accountable for what their tools produce. That changes the urgency considerably.
The commercial pressure is identical though. When the ChatGPT craze hit and everyone was rushing to add a chat interface to everything, we had that conversation internally. It would have made the sales team’s life easier. We didn’t do it. Adding a chat interface on top of a structured workflow system would have undermined the thing that made the system worth having. The capability feature would have eaten the governance product. I think a lot of companies in legal AI are facing a version of that choice right now, whether they’re calling it that or not.
What may take longer is the same thing that took longest in CLM: getting organisations to actually define their rules before asking technology to enforce them. The architecture is the easier part. Deciding what the guardrails should be, that still requires someone willing to be accountable for them. And that’s a people problem, not a technology problem.
Where do you see the biggest gap today between what legal AI tools promise and what governance actually requires?
Nils-Erik:
The demo and the production environment are still very different things.
In a demo, the model reads a contract quickly and flags issues accurately. That part is real, and it’s genuinely impressive. But in production the questions are different: which matters can this model see? Who authorised this workflow to continue? If a clause recommendation turns out to be wrong, where’s the audit trail? Does the privilege analysis hold if the provider’s terms allow them to train on input data?
General counsel and regulators are already asking these questions. They’re not edge cases anymore.
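On the audit-trail question specifically, one plausible shape, illustrative only and not any vendor's actual schema, is an append-only, hash-chained record written before a recommendation is ever shown:

```python
# Illustrative audit record: each entry chains to the previous one,
# so silent edits to the history become detectable.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(matter_id: str, user_id: str, model_version: str,
                 recommendation: str, prev_hash: str) -> dict:
    body = {
        "at": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "user_id": user_id,
        "model_version": model_version,
        "recommendation": recommendation,
        "prev_hash": prev_hash,  # chaining links this entry to the last
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body
```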
Most tools are built to make the demo land. I was guilty of this too. You need the demo to land or you don’t get a second meeting. But governance was always the thing clients raised after the demo, not during it. The teams I’d pay attention to now are the ones treating it as infrastructure rather than a compliance checkbox. One high-profile breach at a well-known firm will set the whole sector back. That’s not speculation; it’s what happened every time a tech wave hit legal without the governance layer being ready.
What happens next?
Nils-Erik:
For the past few years the industry has been running a capabilities race. Who has the most capable model. Who can analyse more contracts. Who can draft faster.
That phase is not over, but something else is starting now: the governance race.
Adoption will depend less on how impressive the model looks in a demo and more on whether the surrounding systems make certain failures structurally impossible.
Do you think the industry fully appreciates that shift yet?
Nils-Erik:
Honestly, no. Not yet.
The VC model rewards capability announcements. “Three times faster contract review” raises a round. “Users cannot bypass the approval workflow” does not. So naturally that’s what gets built and showcased. I’m not saying it’s cynical. It’s rational. You build what the market rewards.
But the market is starting to catch up with what the actual risk picture looks like. Courts are sanctioning. Colorado’s AI Act takes effect in June. The EU’s high-risk obligations land in August. When regulators and judges start asking where control actually sits, “we have a system prompt that tells it to behave” stops being a sufficient answer.
The companies doing this well already, Harvey’s integration with Intapp being the clearest example, aren’t doing it because they were forced to. They understood early that trust is the actual product. The capability gets you in the door. Governance is what keeps you there.
One takeaway?
Nils-Erik:
Trust in legal AI will not come from better prompts. It will come from better architecture.