As the EU AI Act enters its implementation phase, organizations that develop (or "provide," in the Act's parlance), use (or "deploy"), import, or distribute high-risk AI systems will face new obligations set out in Chapter III, Sections 2 and 3 of the Act.
Of these, providers and deployers face the most substantive and structured obligations, most notably the requirements for high-risk AI systems set out in Articles 9 to 15. These requirements are designed to ensure that high-risk AI systems do not undermine the health, safety, and fundamental rights of people in the EU.
The AI Act defines four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. Prohibited systems are banned outright (Article 5), and limited-risk systems face only light transparency duties (e.g., chatbot disclosures). In contrast, high-risk systems carry the most detailed compliance burden, especially for providers and deployers. That is why we focus on them here: these are the rules most likely to affect organizational processes, procurement, and oversight.
Dataiku Govern, our AI governance solution, helps organizations operationalize these obligations, from risk management to post-market monitoring. In this article, we explain what each Article requires and where further guidance from the European Commission is still expected. This is essential reading for non-technical teams preparing for compliance or planning procurement.
Beyond this general scope, which captures AI systems with the potential to undermine these protected public interests, the AI Act defines high risk more concretely through a list of specific use cases (found in Annex III). We tend to talk about these use cases as sitting within Domains, which include: biometrics; critical infrastructure; education and vocational training; employment, workers' management, and access to self-employment; access to and enjoyment of essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes. Within each of these Domains, the specific high-risk Use Cases or applications are spelled out. The Commission has made clear that these Domains and Use Cases may evolve over time.
In addition to the Domains and Use Cases approach, the definition of high-risk AI systems extends to AI systems covered by specific EU harmonization legislation (listed in Annex I). This includes AI systems that are safety components of products already regulated under EU product safety laws, such as medical devices (under the Medical Device Regulation) or machinery (under the Machinery Regulation).
Key deadlines to know:
- August 1, 2024: the Act entered into force.
- February 2, 2025: prohibitions on unacceptable-risk practices and AI literacy obligations apply.
- August 2, 2025: obligations for general-purpose AI models and the governance framework apply.
- August 2, 2026: most remaining provisions apply, including the obligations for Annex III high-risk systems.
- August 2, 2027: obligations apply for high-risk AI systems embedded in products regulated under Annex I legislation.
If your organization builds, buys, or uses AI in these areas, you are likely affected, and preparation should start now.
Before diving into compliance, you must first know with confidence whether your organization develops or uses AI systems that fall into the high-risk category. Without this clarity, you are exposed to significant risk. The assessment isn't always black and white, and some specialized use cases may require consultation with legal teams. For example, a fraud detection agent might be used to assess fraud risk and feed that assessment into the price of an insurance premium. Whether this qualifies as a high-risk system requires careful consideration: AI that detects fraud is not inherently high-risk, but AI that determines the cost of insurance for individuals is. The first step for any organization is therefore a thorough, documented assessment of its entire AI portfolio to determine which systems are in scope.
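As a rough illustration of what a documented portfolio assessment can look like in practice, the sketch below records each AI system with a first-pass classification against the Annex III domains. The domain names, record fields, and example system are our own simplifications for illustration, not terms or templates from the Act, and any real classification should be confirmed with legal counsel.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative shorthand for the Annex III domains; the Act itself is the authoritative list.
ANNEX_III_DOMAINS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_asylum_border",
    "justice_democracy",
}

@dataclass
class AISystemRecord:
    """One entry in a documented AI portfolio assessment (hypothetical structure)."""
    name: str
    intended_purpose: str
    annex_iii_domain: str | None      # None if no Annex III domain applies
    is_safety_component: bool         # Annex I route (e.g., medical devices, machinery)
    rationale: str                    # why the classification was reached
    assessed_on: date = field(default_factory=date.today)

    @property
    def likely_high_risk(self) -> bool:
        # A first-pass flag only; borderline cases still need legal review.
        return self.is_safety_component or self.annex_iii_domain in ANNEX_III_DOMAINS

# Example: fraud scoring that feeds insurance pricing may fall under "essential_services".
premium_model = AISystemRecord(
    name="fraud-risk-to-premium",
    intended_purpose="Adjust insurance premiums using a fraud risk score",
    annex_iii_domain="essential_services",
    is_safety_component=False,
    rationale="Insurance pricing for individuals can be an Annex III use case; fraud detection alone is not.",
)
print(premium_model.likely_high_risk)  # True
```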
So you've assessed your portfolio and identified your high-risk systems. What's next? In the rest of this article, we'll walk through the core obligations outlined in Articles 9-15. While these are critical, there's more to the story of the EU AI Act, including the roles of different actors, compliance for general-purpose AI, and how enforcement will work (which we'll cover in future updates).
What we know about Article 9 (risk management):
You must implement a documented, ongoing risk management process covering the entire AI lifecycle, from design to post-market monitoring. This includes identifying and evaluating known and foreseeable risks to health, safety, and fundamental rights.
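For teams that want to make the "documented, ongoing" part concrete, here is a minimal sketch of a risk register entry that can be updated across the lifecycle. The field names, status values, and example risk are hypothetical, not taken from the Act or from any Commission template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk, tracked from design through post-market monitoring (illustrative)."""
    description: str            # e.g., "model under-performs for a protected group"
    affected_interest: str      # "health", "safety", or "fundamental rights"
    lifecycle_stage: str        # "design", "development", "deployment", "post-market"
    mitigation: str
    residual_risk: str          # e.g., "low", "medium", "high" after mitigation
    review_history: list[tuple[date, str]] = field(default_factory=list)

    def record_review(self, note: str) -> None:
        # Each review appends to the history, keeping the process auditable over time.
        self.review_history.append((date.today(), note))

risk = RiskEntry(
    description="Credit-scoring model drifts and degrades for younger applicants",
    affected_interest="fundamental rights",
    lifecycle_stage="post-market",
    mitigation="Quarterly fairness evaluation with a retraining trigger",
    residual_risk="medium",
)
risk.record_review("Q3 evaluation completed; no retraining needed")
```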
What’s ambiguous:
What we know about Article 10 (data and data governance):
AI systems must be trained, validated, and tested on datasets that are relevant, sufficiently representative, and, within reasonable bounds, free of errors and complete.
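To give a flavor of what "free of errors and complete, within reasonable bounds" can translate to operationally, here is a small sketch of dataset checks using pandas. The column names, thresholds, and toy data are made up for illustration; the Act does not prescribe any particular metric or cutoff.

```python
import pandas as pd

def basic_data_checks(df: pd.DataFrame, group_column: str, min_group_share: float = 0.05) -> dict:
    """Run simple completeness and representativeness checks on a training set (illustrative)."""
    checks = {}

    # Completeness: share of missing values per column.
    checks["missing_share"] = df.isna().mean().to_dict()

    # Error screening: exact duplicate rows are often a sign of collection problems.
    checks["duplicate_rows"] = int(df.duplicated().sum())

    # Representativeness: flag groups that make up less than min_group_share of the data.
    group_shares = df[group_column].value_counts(normalize=True)
    checks["underrepresented_groups"] = group_shares[group_shares < min_group_share].to_dict()

    return checks

# Hypothetical training data for a hiring-screening model.
df = pd.DataFrame({
    "years_experience": [1, 3, 5, None, 7, 2],
    "region": ["north", "north", "north", "south", "north", "north"],
    "label": [0, 1, 1, 0, 1, 0],
})
print(basic_data_checks(df, group_column="region", min_group_share=0.25))
```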
What’s ambiguous:
What we know about Article 11 (technical documentation):
You must maintain detailed technical documentation proving compliance (including system design, intended purpose, training data sources, testing methods, and risk controls) as outlined in Annex IV of the Act.
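One way to keep this documentation close to the systems it describes is to maintain it as structured data that can be versioned and exported alongside each release. The sketch below uses a plain dictionary with illustrative field names loosely inspired by Annex IV; it is not the Annex IV template, and the official annex remains the reference for what must be covered.

```python
import json

# Illustrative documentation skeleton; field names are our own, not the Annex IV wording.
technical_documentation = {
    "system": {
        "name": "resume-screening-assistant",
        "version": "2.1.0",
        "intended_purpose": "Rank job applications for recruiter review",
        "provider": "Example Corp",
    },
    "development": {
        "training_data_sources": ["internal ATS exports 2019-2023"],
        "model_architecture": "gradient-boosted trees",
        "testing_methods": ["holdout evaluation", "subgroup performance analysis"],
    },
    "risk_controls": {
        "risk_management_reference": "risk-register-2025-04",
        "human_oversight": "Recruiter reviews every ranked shortlist before contact",
    },
    "monitoring": {
        "post_market_plan": "Monthly drift and subgroup performance report",
    },
}

# Versioning the file with the model makes it easy to show what applied to which release.
with open("technical_documentation_v2.1.0.json", "w") as f:
    json.dump(technical_documentation, f, indent=2)
```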
What’s ambiguous:
What we know about Article 12 (record-keeping):
High-risk systems must automatically log events to support traceability, performance tracking, and post-market monitoring. Logs must be tamper-resistant and retained appropriately.
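As an example of what "tamper-resistant" can mean in practice, the short sketch below appends events to a hash-chained log, so any later modification of an entry breaks the chain on verification. This is one possible technique among several, not an approach prescribed by the Act, and the event fields shown are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

class ChainedEventLog:
    """Append-only event log where each entry carries a hash of the previous one (illustrative)."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: dict) -> None:
        previous_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "previous_hash": previous_hash,
        }
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        # Recompute every hash; a single edited entry invalidates everything after it.
        previous_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["previous_hash"] != previous_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            previous_hash = entry["hash"]
        return True

log = ChainedEventLog()
log.append({"system": "loan-scoring", "input_id": "a1f3", "decision": "refer_to_human"})
log.append({"system": "loan-scoring", "input_id": "b7c9", "decision": "approve"})
print(log.verify())  # True unless an entry has been altered
```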
What’s ambiguous:
What we know about Article 13 (transparency and provision of information):
Users must be informed in clear terms about the system's intended purpose, its capabilities and limitations (including its levels of accuracy, robustness, and cybersecurity), known risks and reasonably foreseeable misuse, the human oversight measures in place, and the system's expected lifetime and maintenance needs.
What’s ambiguous:
What we know about Article 14 (human oversight):
You must design systems to ensure effective human oversight that prevents or minimizes risks. Oversight mechanisms must be documented, and people assigned to oversight roles must be adequately trained.
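A common way to make oversight concrete is to route low-confidence or high-impact outputs to a trained reviewer instead of letting the system act on them automatically. The sketch below shows that pattern with made-up thresholds, outcomes, and role names; the Act does not mandate any particular mechanism.

```python
from dataclasses import dataclass

@dataclass
class OversightPolicy:
    """Hypothetical policy: when must a human review the system's output?"""
    confidence_threshold: float        # below this, escalate to a reviewer
    always_review_outcomes: set[str]   # outcomes that always need human sign-off
    reviewer_role: str                 # role that has been trained for this task

def route_decision(prediction: str, confidence: float, policy: OversightPolicy) -> str:
    # Escalate whenever the outcome is sensitive or the model is unsure; otherwise proceed.
    if prediction in policy.always_review_outcomes or confidence < policy.confidence_threshold:
        return f"escalate_to:{policy.reviewer_role}"
    return "auto_proceed"

policy = OversightPolicy(
    confidence_threshold=0.8,
    always_review_outcomes={"reject"},
    reviewer_role="claims_officer",
)
print(route_decision("approve", confidence=0.93, policy=policy))  # auto_proceed
print(route_decision("reject", confidence=0.97, policy=policy))   # escalate_to:claims_officer
```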
What’s ambiguous:
What we know about Article 15 (accuracy, robustness, and cybersecurity):
High-risk AI systems must maintain appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, and resist errors, misuse, or adversarial attacks.
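One simple way technical teams can start evidencing robustness is to measure how stable a model's predictions are under small input perturbations. The sketch below does this for a scikit-learn classifier on toy data; the noise level, trial count, and stability metric are illustrative choices, not thresholds from the Act.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def prediction_stability(model, X: np.ndarray, noise_scale: float = 0.01, trials: int = 20) -> float:
    """Share of predictions that stay unchanged when small Gaussian noise is added (illustrative)."""
    rng = np.random.default_rng(0)
    baseline = model.predict(X)
    unchanged = []
    for _ in range(trials):
        perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
        unchanged.append(np.mean(model.predict(perturbed) == baseline))
    return float(np.mean(unchanged))

# Toy data standing in for a real high-risk use case.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

print(f"Prediction stability under noise: {prediction_stability(model, X):.2%}")
```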
What’s ambiguous:
Articles 9–15 interact closely with post-deployment monitoring obligations. For example, the logs required under Article 12 feed the post-market monitoring that providers must carry out, and risks identified after deployment must flow back into the Article 9 risk management process, which is explicitly meant to run across the entire lifecycle.
We are awaiting further guidance on how to align these operational tasks with compliance requirements.
To ensure stakeholders and teams are prepared to meet the new obligations once they take effect, many organizations are taking steps toward compliance despite the remaining ambiguities. Early operational choices may need to be revised once official guidance lands, but the goal is to shrink the distance between today's practices and the practices that guidance will eventually require.
Even while waiting for guidance, non-technical stakeholders can begin preparing: for example, by inventorying and classifying the organization's AI systems, assigning clear ownership for each high-risk system, standing up documentation templates aligned with Annex IV, and planning training for the people who will carry out human oversight.
Dataiku Govern is designed to help organizations put documentation and controls in place for their AI systems today, including support for AI Act readiness as well as other compliance activities.
The Commission is expected to publish implementation guidelines in the second half of 2025. In the meantime, early preparation, guided by the structure of Articles 9–15, is the best way to stay ahead of the curve and demonstrate responsible AI leadership.
Stay tuned: We will continue publishing actionable updates to help you operationalize the EU AI Act with clarity and confidence.