AI Act: High-Risk AI Systems: What Are the Challenges and Obligations?

I. Definition of a High-Risk AI System

Regulation (EU) 2024/1689 (the AI Act, also known by its French acronym RIA)1 classifies AI systems according to the risks they pose. Among them, the regulation defines high-risk AI systems (Art 6) as those that may harm the health, safety, or fundamental rights of individuals.

1. Which AI Systems Are Considered High-Risk?

An AI system is considered high-risk if:

  • It is intended to be used as a safety component of a product covered by the Union harmonisation legislation listed in Annex I of the AI Act, or is itself such a product (Art 6.1.a);
  • That product is required to undergo a third-party conformity assessment before being placed on the market (Art 6.1.b); these two conditions of Art 6.1 are cumulative;
  • Alternatively, it falls within one of the use cases listed in Annex III of the regulation (Art 6.2). These are stand-alone systems (neither safety components nor products in themselves) presenting a high risk to health, safety, or fundamental rights, assessed against two criteria:
    • The severity and probability of the potential harm;
    • Their use in sensitive areas defined by the regulation.
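The classification logic above can be sketched as a short illustrative function. This is a simplification for illustration only: the parameter names are ours, and actual classification under Art 6 requires a case-by-case legal analysis.

```python
def is_high_risk(
    safety_component_or_annex1_product: bool,
    third_party_assessment_required: bool,
    annex3_use_case: bool,
    significant_risk: bool = True,
) -> bool:
    """Illustrative sketch of the Art 6 classification logic (not legal advice)."""
    # Art 6.1: both conditions must hold together (cumulative).
    if safety_component_or_annex1_product and third_party_assessment_required:
        return True
    # Art 6.2: Annex III use cases are classified as high-risk, unless the
    # system does not present a significant risk of harm (derogation, Art 6.3).
    return annex3_use_case and significant_risk

# A recruitment screening tool (Annex III, employment) is high-risk:
print(is_high_risk(False, False, True))  # True
```

The point of the sketch is the structure of the test: the two Annex I conditions apply together, while an Annex III use case is an independent route into the high-risk category.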

2. Examples of High-Risk AI Systems

Annex III of the RIA lists high-risk AI systems in various domains, such as:

  • Biometrics (e.g., remote biometric identification AI systems)
  • Critical infrastructure (e.g., AI systems used as safety components in the management and operation of critical digital infrastructure)
  • Education and vocational training (e.g., AI systems used to assess learning outcomes)
  • Employment, workforce management, and access to self-employment (e.g., AI systems used for recruiting or selecting individuals)
  • Access to essential private services and public services and essential social benefits (e.g., AI systems for assessing eligibility for essential social benefits and services)
  • Law enforcement (e.g., AI systems for evaluating the reliability of evidence in criminal investigations or prosecutions)
  • Migration, asylum, and border control management (e.g., AI systems for assessing risks such as irregular migration)
  • Administration of justice and democratic processes (e.g., AI systems for researching and interpreting facts or law).

II. What Are the Obligations Related to High-Risk AI Systems?

The RIA imposes a framework to ensure the compliance of these AI systems by distinguishing two types of obligations: general obligations applicable to all high-risk AI systems (Art 8 to 15) and specific obligations based on the role of the operators involved (e.g., providers, deployers, and others).

Since high-risk AI systems frequently process personal data, maintaining sound data governance consistent with the GDPR remains essential.

1. General Obligations

All high-risk AI systems must comply with the following obligations:

  • Risk Management (Art 9): Implement a risk assessment and management system covering the entire lifecycle of the AI system, comparable to the Data Protection Impact Assessment (DPIA) under the GDPR;
  • Data Quality and Governance (Art 10): Ensure that training, validation, and testing datasets meet strict quality and governance criteria;
  • Technical Documentation (Art 11): Provide complete and up-to-date documentation of the AI system, covering the elements detailed in Annex IV of the regulation;
  • Event Logging (Art 12): Ensure traceability by automatically recording significant events throughout the system’s operation;
  • Transparency Towards Deployers (Art 13): Clearly inform deployers of the system’s operation, risks, and limitations through detailed instructions for use;
  • Human Oversight (Art 14): Provide a human-machine interface that allows effective human supervision and intervention;
  • Accuracy, Robustness, and Cybersecurity (Art 15): Implement measures that ensure reliable performance and resilience against errors and attempts to exploit vulnerabilities.

2. Specific Obligations Based on Operators

Providers, whether organizations or individuals, are those who develop an AI system or general-purpose AI model and place it on the market or in service under their own name or brand, whether for a fee or free of charge (obligations Art 16 and following).

Examples of provider obligations:

  • Affixing a CE marking (Art 16.h) and issuing a declaration of conformity (Art 18.1.e);
  • Keeping technical documents and evidence of conformity for 10 years (Art 18);
  • Retaining automatically generated logs for at least six months (Art 19);
  • Taking corrective measures in case of non-compliance with their obligations (Art 20).

Deployers are those who use an AI system under their own authority, except for personal non-professional use (Obligations Art 26 and 27):

  • Use the AI system in accordance with the provider’s instructions (Art 26.1);
  • Implement human oversight of the AI system (Art 26.2);
  • Notify the provider, then the importer or distributor, as well as the relevant supervisory authorities, of any serious incident (Art 26.5);
  • Conduct an impact assessment on the use of the AI system (Art 27).

III. What Sanctions Apply in Case of Non-Compliance?

Avoiding heavy sanctions requires a precise understanding of the legal framework for AI.

Sanctions for non-compliance with the RIA obligations (Art 99) are primarily financial and apply from August 2, 2025 (Art 113.b). They vary depending on the severity of the infringement:

1. Use of a Prohibited AI System: 

Fine of up to €35 million or 7% of the total worldwide annual turnover, whichever is higher (Art 99.3).

2. Non-Compliance with Operator or Notified Body Obligations: 

Fine of up to €15 million or 3% of the total worldwide annual turnover, whichever is higher (Art 99.4).

3. Providing Inaccurate or Misleading Information to National Authorities: 

Fine of up to €7.5 million or 1% of the total worldwide annual turnover, whichever is higher (Art 99.5).

4. Criteria for Assessing Sanctions (Art 99.7)

The amount of fines depends on several factors, including:

  • Nature, duration, and consequences of the violation, and the number of people affected (severity and impact of the infringement);
  • Previous fines for the same violation or similar infringements (history of sanctions);
  • Turnover, market share, and financial capacity (size and influence of the operator);
  • Profits obtained or losses avoided through the violation (aggravating or mitigating factors);
  • Efforts made to rectify the situation (cooperation with authorities);
  • Technical and organizational measures implemented (level of responsibility);
  • Voluntary declaration by the operator or detection by an authority (mode of discovery of the infringement);
  • Whether the violation was committed intentionally or through negligence (intention);
  • Actions taken to mitigate damages (corrective measures).

5. Special Case for SMEs and Startups

For SMEs and startups, the regulation provides a proportionality clause: in the event of an infringement, the fine is capped at the lower of the two amounts (fixed sum or percentage of turnover), so that penalties remain appropriate to their financial situation (Art 99.6).

6. Non-Financial Sanctions

Authorities may also require:

  • Withdrawal of the non-compliant AI system from the market (Art 83.2);
  • Recall of products already in circulation (Art 83.2).

7. Other Sanctions to Consider?

Since AI systems may process personal data, the AI Act and the GDPR2 overlap, and sanctions under other European regulations may therefore also apply. The CNIL3, in the context of GDPR enforcement, has specified that formal notices may be issued requiring processing to be brought into compliance or limited, whether temporarily or permanently (possibly subject to periodic penalty payments). In addition, administrative fines of up to €20 million or 4% of the total worldwide annual turnover, whichever is higher, may be imposed.

IV. How to Design a High-Risk AI System Compliant with the Regulatory Framework?

The lifecycle of a high-risk artificial intelligence system requires rigorous oversight from its inception. Chapter III of the AI Act imposes requirements on providers and deployers to ensure the compliance of these systems before they are placed on the market.

Special attention must be given to emotional AI systems that analyze human emotions through physiological signals. Despite their applications in marketing, education, and HR, they pose ethical risks, particularly in recruitment, where biases can lead to discrimination. The AI Act strictly regulates their use in sensitive areas such as employment and education (Annex III), and Article 5 goes further by prohibiting systems that infer emotions in the workplace and in educational institutions, except for medical or safety reasons.

A high-risk AI system must be designed from the outset to comply with principles of security, ethics, and transparency. This relies on by-design methods, similar to the privacy-by-design concept of the GDPR. Providers must therefore:

  • Assess Risks Upfront: The AI Act imposes a risk management framework covering the entire lifecycle of a system, from development to operation (Art. 9). An AI system must be tested in various scenarios to identify potential vulnerabilities and limit any harmful side effects. For example, a recruitment algorithm must avoid any unintentional discrimination due to algorithmic bias.
  • Ensure Data Quality: Article 10 requires that the databases used to train an AI system meet strict standards. An AI model predicting employee attrition must be trained on balanced datasets to avoid amplifying systemic biases.
  • Integrate Sufficient Human Oversight: Article 14 requires a level of human supervision to prevent harmful automated decisions. For example, a system evaluating loan risks must include a human validation process before any automatic refusal.
  • Ensure Robustness and Cybersecurity: Article 15 mandates securing AI systems against adversarial attacks and ensuring their reliability. A medical AI evaluating MRI results must demonstrate consistent accuracy and reliability in various environments. Additionally, effective cyber crisis management mechanisms must be in place to respond quickly to potential threats or attacks targeting these systems.
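As an illustration of the kind of data-quality check Article 10 calls for, the sketch below computes the representation rate of each group in a hypothetical training set and flags under-represented groups. The field name, threshold, and data are our assumptions; the regulation does not prescribe any particular metric.

```python
from collections import Counter

def representation_report(records: list[dict], field: str, min_share: float = 0.2) -> dict:
    """Share of each group for `field`; flags groups below `min_share` (illustrative threshold)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "under_represented": n / total < min_share}
        for group, n in counts.items()
    }

# Hypothetical recruitment training data, heavily skewed toward one group:
data = [{"gender": "F"}] + [{"gender": "M"}] * 5
report = representation_report(data, "gender")
print(report["F"]["under_represented"])  # True: 1/6 of the data falls below the 20% threshold
```

Real Article 10 compliance involves far more than representation counts (relevance, error-freeness, annotation quality, statistical properties), but automated checks of this kind are a natural building block of a data-governance pipeline.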

Every high-risk AI system must undergo a conformity assessment procedure before being placed on the market. This involves compiling a complete technical file (Art. 11), implementing a quality management system (Art. 17), and, in some cases, certification by a third-party organization.

V. What Impact for Companies and Providers of High-Risk AI Systems?

The AI Act does not only concern tech giants. Any company developing, marketing, or using an AI system that falls into the high-risk category must adapt its practices.

1. Adapting Companies to New Requirements 

Economic actors must prepare for a new, demanding regulatory framework that affects not only developers but also companies deploying these technologies in their operations. 

For example, a bank using AI to grant loans must justify the fairness and transparency of its algorithm to regulatory authorities. A hospital wishing to integrate AI for diagnostic assistance must prove that its system meets the robustness and reliability standards set by the regulation.

Companies must therefore establish dedicated teams to ensure their compliance, especially in sectors where AI is integrated into critical processes: finance, healthcare, transportation, security, etc.

2. Increased Role of AI Providers 

AI providers now play a central role, as they must ensure that their systems meet transparency, security, and compliance requirements before being placed on the market. Article 16 of the regulation specifically requires them to:

  • Maintain a quality management system (Art. 17) that guarantees the traceability and auditability of the system.
  • Provide complete technical documentation to users and regulatory authorities.
  • Implement continuous monitoring of the AI system to ensure it remains compliant and that no unforeseen risks emerge after deployment.

European companies using AI solutions developed by non-European providers must also ensure that these systems comply with the European regulatory framework, or risk sanctions.


Aumans Avocats: Specialists in IT/Data, Data Protection, and DPO Outsourcing

As a law firm specializing in IT/Data and data protection, we are at your service to support you in all your projects. Whether you are a startup, an SME, or a group of companies, our expertise will help you navigate the complex landscape of regulations and compliance with confidence. Do not hesitate to contact us for personalized advice, assistance with your GDPR compliance, and to secure your digital future.


Sources

  1. Regulation (EU) 2024/1689 (AI Act) – https://eur-lex.europa.eu/eli/reg/2024/1689/oj?locale=en
  2. Regulation (EU) 2016/679 (GDPR) – https://eur-lex.europa.eu/eli/reg/2016/679/oj?locale=en
  3. CNIL, first questions and answers on the entry into force of the European AI Regulation – https://www.cnil.fr/fr/entree-en-vigueur-du-reglement-europeen-sur-lia-les-premieres-questions-reponses-de-la-cnil

AUMANS AVOCATS (formerly FOUSSAT AVOCATS & DEROULEZ AVOCATS)
AARPI
Paris +33 (0)1 85 08 54 76 / Lyon +33 (0)4 28 29 14 92 / Marseille +33 (0)4 84 25 67 89 / Bruxelles +32 (0)2 318 18 36
