I. A Legislative Framework for Regulating Artificial Intelligence within the European Union
Adopted by the European Union, Regulation (EU) 2024/1689, also known as the Artificial Intelligence Regulation (AIR) or AI Act, came into force on August 1, 2024. Its objective? To establish a harmonized framework for the marketing, deployment, and use of artificial intelligence systems (AIS) within the EU. It is part of a broader digital regulation initiative alongside the Data Act, which aims to regulate access to and sharing of data within the European Union.
This text aims to address two key challenges: fostering innovation while protecting the fundamental rights of individuals, democracy, and the rule of law.
To achieve this, the AIR adopts a risk-based approach, classifying AIS according to their level of risk:
- Unacceptable risk -> prohibition (Art 5);
- High risk -> strict obligations (Art 6);
- Limited risk -> transparency obligations (Art 50);
- Minimal or no risk -> flexible framework (Art 95).
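For orientation only, the four tiers above can be captured in a minimal lookup table, useful for a first-pass compliance triage. This is an illustrative sketch, not legal advice; the tier names and field labels are our own, while the article numbers come from the list above.

```python
# Illustrative sketch only (not legal advice): the AI Act's four risk
# tiers, as summarized above, mapped to their governing articles.
RISK_TIERS = {
    "unacceptable": {"article": 5,  "regime": "prohibition"},
    "high":         {"article": 6,  "regime": "strict obligations"},
    "limited":      {"article": 50, "regime": "transparency obligations"},
    "minimal":      {"article": 95, "regime": "codes of conduct"},
}

def governing_article(tier: str) -> int:
    """Return the AIR article governing a given risk tier."""
    return RISK_TIERS[tier]["article"]
```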
II. Scope of the AIR
1. Material Scope: Which AI Systems Are Covered by the Regulation?
The AIR covers both AIS and general-purpose AI models (Art 3), but some are exempt from its scope:
- AIS for military, defense, or national security purposes (Art 2.3);
- Systems and models for scientific research and development, as well as their applications (Art 2.6);
- AIS used by individuals in the course of a purely personal, non-professional activity (Art 2.10).
2. Territorial Scope: Which Activities Are Covered by the AIR?
The main operators and actors targeted by this new regulation (Art 2.1) are:
- Providers, whether established in the EU or in a third country, who place AIS on the market or put them into service in the EU, or who place general-purpose AI models on the market;
- Deployers of AIS established in the EU;
- Providers and deployers established outside the EU if the AI outputs are used within the EU. Outputs refer to predictions, content, recommendations, or decisions generated by an AIS (Art 3.1);
- Importers of AIS;
- Distributors of AIS;
- Product manufacturers who place an AIS on the market or put it into service together with their product, under their own name or trademark;
- Authorized representatives of providers established outside the EU.
It is important to note that the AIR is not limited to the borders of the EU: it applies to any AIS placed on the European market. In other words, as soon as an AIS is marketed or deployed within the EU, the provisions of the AIR may apply.
Furthermore, the regulation covers AIS outside the EU whose outputs and data are used within the EU. This principle of extraterritoriality of the AIR is reminiscent of the GDPR, which applies to data controllers processing the personal data of European citizens, even if they are located outside the EU.
Actors must also ensure that their practices comply with the GDPR obligations in force.
3. Timeline for the Implementation of the AIR
The AIR will be applied progressively, following this timeline:
- February 2, 2025 -> Prohibition of AIS with unacceptable risk (Chapters I and II).
- August 2, 2025 -> Implementation of specific measures for general-purpose AI and designation of competent authorities at the national level. Application of Chapter III Section 4, Chapters V, VII, XII, and Article 78, with the exception of Article 101.
- August 2, 2026 -> General application of the AIR.
- August 2, 2027 -> Implementation of rules concerning high-risk AI specified in Annex I.
III. Classification and Obligations Related to AI Systems
Not all AI systems present the same degree of risk. Therefore, the regulation adapts its obligations according to the level of risk. Here are the four main categories of AIS governed:
1. Unacceptable Risk AIS: Which AI Systems Are Prohibited?
Certain AIS are deemed unacceptable because they violate the fundamental principles of the EU and human rights. Article 5 specifies the systems concerned, such as social scoring, prediction of future crimes, or exploitation of individuals’ vulnerabilities to influence their behavior.
2. Which AI Systems Are Considered High Risk?
Described in Article 6 and listed in Annex III, these systems are likely to harm the health, safety, or rights of individuals. Annex III includes high-risk AIS in the following domains:
- Biometrics (e.g., remote biometric identification AIS);
- Critical infrastructures (e.g., AIS used as a security component in the management and operation of critical digital infrastructures);
- Education and vocational training (e.g., AIS used to assess learning outcomes);
- Employment, workforce management, and access to self-employment (e.g., AIS used for recruiting or selecting individuals);
- Access to essential private services and public services and social benefits (e.g., AIS for assessing eligibility for essential social benefits and services);
- Law enforcement (e.g., AIS for evaluating the reliability of evidence in criminal investigations or prosecutions);
- Migration, asylum, and border control management (e.g., AIS for assessing risks such as irregular migration);
- Administration of justice and democratic processes (e.g., AIS for researching and interpreting facts or law).
3. Limited Risk AIS
According to Article 50 of the AIR, certain AI systems, notably those generating deepfakes, can be used for manipulation purposes. Although considered to present a limited risk, they are subject to transparency and information obligations. For example, “Deployers of an AI system that generates or manipulates images or audio or video content constituting a deepfake must indicate that the content has been generated or manipulated by AI” (Art 50.4).
4. Minimal or No Risk AIS
These are AIS that present a negligible risk to individuals. Such systems are governed by voluntary codes of conduct, to be drawn up in accordance with Article 95 of the AIR.
IV. Prohibited AI Systems
Chapter II, which addresses the prohibition of unacceptable risk AIS, came into effect on February 2, 2025, and includes the following prohibitions:
| AIR Article | Prohibited Practice | Description |
| --- | --- | --- |
| 5(1)(a) | Manipulation and Deception | AIS using subliminal, manipulative techniques beyond a person’s or group’s consciousness, or deliberately deceptive techniques aimed at distorting/altering behaviors or causing significant harm. |
| 5(1)(b) | Exploitation of Vulnerabilities | AIS exploiting vulnerabilities related to age, disability, or a specific social or economic situation to distort behavior or cause significant harm. |
| 5(1)(c) | Social Scoring | AI systems evaluating or classifying individuals or groups based on their social behavior or personal characteristics, leading to unfavorable or harmful treatment. |
| 5(1)(d) | Prediction of Criminal Offenses | AIS assessing or predicting the risk of a person committing a criminal offense based solely on personality traits or characteristics. An exception applies where the AIS supports a human assessment already based on objective and verifiable facts directly linked to a criminal activity. |
| 5(1)(e) | Scraping for Facial Recognition | AIS that create or develop facial recognition databases through untargeted scraping of facial images from the internet or surveillance videos. |
| 5(1)(f) | Emotion Recognition | AIS that infer the emotions of individuals in the workplace or in educational institutions, except for medical or safety reasons. |
| 5(1)(g) | Biometric Categorization | AIS that categorize individuals based on their biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation. |
| 5(1)(h) | Remote Biometric Identification | AIS used for real-time remote biometric identification in public spaces for law enforcement purposes, except if necessary for targeted searches for victims, prevention of specific threats, or locating and identifying an individual as part of a criminal investigation, prosecution, or enforcement of a criminal penalty. |
V. Obligations for Companies and Developers
Implementation of a Risk Management System for AIS
Companies must implement a risk management system for AI, in accordance with Article 9 of the AIR. This includes:
- A prior impact analysis of AIS on fundamental rights;
- Continuous monitoring and audits to ensure compliance with the regulation’s requirements.
Role of the National Coordinator for Artificial Intelligence
Each Member State must designate a national coordinator for artificial intelligence (Art. 59). Their role is to:
- Oversee the implementation of the AIR at the national level;
- Provide advice to companies and developers on the compliance of AI systems;
- Coordinate with the European Commission and supervisory authorities.
VI. What penalties apply in case of non-compliance?
Financial penalties and legal liability of companies
The AI Act provides for significant penalties in case of non-compliance with the regulatory framework. For undertakings, each cap is the higher of the fixed amount and the percentage of total worldwide annual turnover (Art 99). The penalties vary depending on the severity of the infringement:
- Up to €35 million or 7% of global annual turnover for the most serious infringements (e.g., placing a prohibited AI system on the market);
- Up to €15 million or 3% of global annual turnover for non-compliance with the obligations applicable to high-risk AI systems;
- Up to €7.5 million or 1.5% of global annual turnover for supplying incorrect, incomplete, or misleading information to the authorities.
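Under Article 99 of the AI Act, each cap for an undertaking is the higher of the fixed amount and the turnover-based percentage. The arithmetic can be sketched as follows; this is an illustrative example, not legal advice, and the tier keys are our own labels. Percentages are stored in tenths of a percent so the computation stays in exact integers.

```python
# Illustrative sketch (not legal advice): AI Act fine caps (Art. 99).
# For undertakings the applicable cap is the HIGHER of the fixed amount
# and the percentage of total worldwide annual turnover.
# Percentages are stored in tenths of a percent (7% -> 70).
PENALTY_TIERS = {
    "prohibited_practice":   (35_000_000, 70),  # Art. 99(3): €35M or 7%
    "high_risk_obligations": (15_000_000, 30),  # Art. 99(4): €15M or 3%
    "incorrect_information": (7_500_000,  15),  # Art. 99(5): €7.5M or 1.5%
}

def max_fine(tier: str, annual_turnover_eur: int) -> int:
    """Return the maximum fine for an undertaking: the higher of the
    fixed cap and the turnover-based cap."""
    fixed_cap, pct_tenths = PENALTY_TIERS[tier]
    return max(fixed_cap, annual_turnover_eur * pct_tenths // 1000)

# A company with €1 billion turnover facing the top tier:
print(max_fine("prohibited_practice", 1_000_000_000))  # prints 70000000
```

For a large undertaking the turnover-based cap dominates (7% of €1 billion is €70 million, above the €35 million floor), while for a small company the fixed amount applies.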
How can companies comply?
To minimize the risk of penalties, companies should anticipate the implementation of the AI Act by:
- Conducting regular audits of their AI systems;
- Maintaining records documenting the use of AI systems;
- Implementing a policy of transparency and control over the systems placed on the market.
Aumans Law Firm: Specialists in IT/Data, Data Protection, and DPO Outsourcing
As a law firm specializing in IT/Data and data protection, we are at your service to support you in all your projects. Whether you are a startup, an SME, or a group of companies, our expertise will help you navigate the complex landscape of regulations and compliance with confidence. Do not hesitate to contact us for personalized advice and to secure your digital future.


