The EU AI Act: What we know so far

This month, December 2023, the European Parliament and the Council reached a political agreement on the EU AI Act, which will be the first comprehensive law on AI in the EU. Negotiations on the final text began in June 2023, and many details are still up in the air. A final text is expected in 2024, and the regulation is expected to apply from 2026. Just as the GDPR did in 2018, this new regulation is expected to shake up the business world, not just in the EU but worldwide. Here is what we know so far, and what your organization can do to prepare for it now.

Summary of the EU AI Act

The EU has introduced the AI Act, the world’s first comprehensive regulation on artificial intelligence. The law aims to regulate the development and use of AI technology to ensure better, safer, and fairer conditions for its users. Central to the Act is risk classification: AI systems are categorized by the risk they pose to users, ranging from unacceptable risk (prohibited) to high risk (heavily regulated) to limited and minimal risk (lighter requirements).

For contract management teams, the EU AI Act has specific implications for any AI-assisted features in CLM platforms: clause generation, risk scoring, and automated review tools may all fall under the Act’s requirements depending on how they are used and what decisions they influence. For a grounded look at how AI actually works in CLM today and where the real governance risks lie, see AI in CLM: Separating Value from Hype.

What the EU AI Act means in practice

For in-house legal and compliance teams evaluating AI-enabled CLM tools, the EU AI Act introduces a new due diligence requirement: understanding how AI features in your contract management platform work, what data they process, and what oversight mechanisms are in place. This aligns directly with broader data governance obligations under GDPR. For a look at how data sovereignty intersects with AI in contract management, read Why Data Sovereignty Matters for Contract Management and What It Means for AI. For Precisely’s own approach to responsible AI innovation, see Smarter Contracting with AI: Inside Precisely’s Approach to Responsible Innovation.

You may be wondering...

What is the EU AI Act?

The EU AI Act is the world's first comprehensive regulation on artificial intelligence. It classifies AI systems by the risk they pose — from unacceptable risk (prohibited) to high risk (heavily regulated) to limited and minimal risk — and sets requirements for transparency, human oversight, and technical documentation accordingly.

What is the EU AI Act's risk classification system?

The EU AI Act classifies AI systems into four categories: unacceptable risk (banned), high risk (requiring conformity assessments and human oversight), limited risk (transparency obligations), and minimal risk (no specific requirements). Most AI features in commercial CLM tools likely fall into the limited or high-risk categories.

How does the EU AI Act affect AI features in CLM platforms?

CLM platforms using AI for clause generation, risk scoring, or automated contract review may fall within the Act's scope depending on how those features are used. Vendors should be able to demonstrate transparency about their AI models and support human oversight of AI-assisted decisions.

What should legal teams know about using AI tools under the EU AI Act?

Legal teams should ensure they understand what decisions the AI is influencing, whether the system is classified as high risk, what transparency obligations apply, and what human oversight mechanisms are in place. AI-assisted contract review outputs should be documented and overridable.
If you have any further questions or just want to reach our team, click the button below.
Contact us