What Are High-Risk AI Systems?
In addition to the general scope laid out above, which captures AI systems with the potential to undermine identified protected public interests, the AI Act defines high-risk systems further by listing specific use cases in Annex III. We tend to talk about these use cases as sitting within Domains:
- biometrics
- critical infrastructure
- education and vocational training
- employment, workers' management and access to self-employment
- access to and enjoyment of essential private and public services and benefits
- law enforcement
- migration, asylum and border control management
- administration of justice and democratic processes

Within each of these Domains, Annex III clarifies the specific high-risk Use Cases or applications. The Commission has made clear that both the Domains and the Use Cases may evolve over time.
Beyond the Domains and Use Cases in Annex III, the definition of high-risk AI systems extends to AI systems covered by the Union harmonization legislation listed in Annex I. This captures AI systems that serve as safety components of products already regulated under EU product safety law, such as medical devices (under the Medical Device Regulation) or machinery (under the Machinery Regulation).
Key deadlines to know:
- August 2, 2026 → High-risk AI systems under Annex III must comply with the core requirements (Articles 9–49), including risk management, data governance, and conformity assessment.
- August 2, 2027 → Compliance deadline for high-risk AI systems embedded in regulated products (e.g., medical devices, machinery) under EU product safety laws, i.e., the Annex I track.
If your organization builds, buys, or uses AI in these areas, you are likely affected, and preparation should start now.
Before diving into compliance, you must first know with confidence whether your organization develops or uses AI systems that fall into the high-risk category; without this clarity, you are exposed to significant risk. The assessment isn't always black and white, and some specialized use cases will require consultation with legal teams. For example, a fraud detection agent might assess fraud risk and feed that score into an insurance premium calculation. Whether this qualifies as high-risk requires consideration: AI that detects fraud is not inherently high-risk, but under Annex III, AI used for risk assessment and pricing of life and health insurance for individuals is. The first step for any organization is therefore a thorough, documented assessment of your entire AI portfolio to determine which systems are in scope, as in the sketch below.
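To make that triage concrete, here is a minimal Python sketch of a first-pass portfolio screen. The domain labels, the AISystem record, and the triage function are hypothetical illustrations, not part of the Act or any official tooling; the dates are the deadlines listed above, and the output is a prompt for legal review, not a legal determination.

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date
from typing import Optional

# Paraphrased labels for the eight Annex III domains listed above.
ANNEX_III_DOMAINS = {
    "biometrics",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_workers_management",
    "essential_services_and_benefits",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

@dataclass
class AISystem:
    name: str
    annex_iii_domain: Optional[str]  # closest Annex III domain, if any
    annex_i_safety_component: bool   # safety component of a product under EU product safety law?

def triage(system: AISystem) -> tuple[str, Optional[date]]:
    """First-pass screen only: flags systems for documented legal review."""
    if system.annex_i_safety_component:
        # Annex I track: embedded in regulated products (e.g., medical devices).
        return "high-risk (Annex I track)", date(2027, 8, 2)
    if system.annex_iii_domain in ANNEX_III_DOMAINS:
        # Annex III track: a domain match alone is not conclusive; the specific
        # use case matters (fraud detection vs. insurance pricing, as above).
        return "potentially high-risk (Annex III track)", date(2026, 8, 2)
    return "not flagged by this screen", None

# The fraud/insurance edge case from the text: pricing puts it in scope,
# while fraud detection alone would not.
pricing = AISystem("premium_pricing", "essential_services_and_benefits", False)
print(triage(pricing))  # -> ('potentially high-risk (Annex III track)', datetime.date(2026, 8, 2))
```

An inventory like this is only a starting point: each flagged system still needs the documented, case-by-case analysis described above before you can treat its classification as settled.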
