I. The Genesis of the Code of Good Practices on AI
1. The AI Act and the legal framework for artificial intelligence in the EU.
The European Union has established a general framework to encourage innovation in artificial intelligence (AI) while ensuring a high level of protection of rights and security. At the heart of this framework is Regulation (EU) 2024/1689 (the AI Act)1, which entered into force on 1 August 2024, accompanied by a “Code of Good Practices”2 intended to facilitate the practical implementation of some of its provisions.
2. The successive drafts of the Code of Good Practices
The European Commission entrusted a group of independent experts with drafting a Code of Good Practices for general-purpose AI. Three successive drafts were published: on 14 November 20243, then on 19 December 20244, and finally on 11 March 20255 a more advanced third version6.
The drafters emphasize the collaborative and evolving nature of the text: a wide range of actors (from industry, academia, and civil society) have contributed to refining and enriching the commitments contained in the Code. The text therefore remains subject to regular consultation, and the Commission intends to update it to reflect the rapid advances in AI7.
- First draft (14 November 2024): The Code establishes an initial framework for AI model providers, proposing general commitments regarding transparency and risk management. It serves as a “foundation” and highlights the need for multi-stakeholder collaboration and the development of a “future-proof” code.
- Second draft (19 December 2024): The Code incorporates extensive feedback from experts and stakeholders. The first part of the draft defines transparency and copyright obligations for AI model providers, with exemptions for certain open-source models. The second part, addressing systemic risks, deals with risk assessment, risk mitigation measures, and cybersecurity obligations.
- Third draft (11 March 2025): This version proposes a more finalized structure, divided into sections (Transparency, Copyright, and Safety and Security), as well as specific provisions for general-purpose AI models with systemic risk (GPAISR – General-Purpose AI with Systemic Risk) and a “Safety and Security Framework.” The latter will apply to the GPAISR of the parties to the Code and will detail the assessment of systemic risks, their mitigation, and the risk-management measures and procedures the parties envisage adopting to keep the systemic risks related to their GPAISR at acceptable levels8.
Given the multiplication of artificial intelligence projects in companies, cyber risk prevention must become systematic within all AI governance frameworks.
II. Content and obligations of the Code of Good Practices
The Code covers two categories of models:
- General-purpose AI models: models that, after being trained on large amounts of data, can perform a wide variety of tasks. Such models are flexible enough to be integrated into different systems or applications. The Code does not cover AI used solely for research, development or prototyping prior to commercialization9.
- Models presenting systemic risk (GPAISR – General-Purpose AI with Systemic Risk)10: this term refers to a risk tied to the capabilities of general-purpose AI models that could have a significant impact on the European market. Such a risk could affect public health, safety, fundamental rights, or society at large, and propagate widely along the value chain.
For the first category, the Code provides for transparency obligations and respect for copyright, gathered in the “Transparency Section” and the “Copyright Section” of the Code11. Providers are thus required to commit to transparency by supplying clear documentation on their models (for example, on the origin and type of data, and on the capabilities and limitations of the AI12) and to comply with copyright obligations13.
For GPAISR models, the Code introduces a specific section entitled “Safety and Security Section,” which describes a set of risk assessment and control measures14. This includes, for example, commitments to conduct risk analyses (“Systemic risk assessment […] along the entire model lifecycle […]”) and to establish an appropriate governance structure (“Signatories commit to adopting and implementing a Safety and Security Framework […]”).
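As a purely illustrative sketch, the transparency commitments described above can be pictured as a structured documentation record maintained by a provider. The field names below are hypothetical and do not reproduce the Code's actual documentation templates:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModelTransparencyRecord:
    """Hypothetical sketch of the kind of documentation a provider of a
    general-purpose AI model might keep to support the transparency
    commitments; all field names are illustrative, not from the Code."""
    provider: str
    model_name: str
    data_sources: List[str]     # origin and type of training data
    capabilities: List[str]     # what the model is designed to do
    limitations: List[str]      # known limits and failure modes
    copyright_policy: str = ""  # summary of, or link to, the copyright policy

    def is_complete(self) -> bool:
        # Minimal completeness check: every descriptive field is filled in.
        return all([self.provider, self.model_name, self.data_sources,
                    self.capabilities, self.limitations, self.copyright_policy])
```

Such a record could be filled in per model and checked before release; the completeness test here is only a placeholder for the far more detailed documentation the Code actually calls for.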
III. The Strategic Advantages of the Code
Although this Code is not binding, it constitutes a valuable tool for helping the professionals concerned comply with the obligations of the AI Act. Indeed, although it does not confer a presumption of conformity with the AI Act, it offers a framework and practical guidance that help the parties meet their obligations, particularly in matters of systemic risk management, security, and transparency.
By adhering to this Code, signatories facilitate the implementation of the AI Act, which can reduce the risk of incidents and sanctions while strengthening their compliance efforts.
Indeed, Article 101 of the AI Act explicitly provides that commitments made within the framework of codes of good practices are taken into account when determining fines. This applies to providers of general-purpose AI models in the event of non-compliance: “The Commission also takes into account commitments made in accordance with Article 93, paragraph 3, or commitments made in relevant codes of good practice pursuant to Article 56.”
Adhering to the Code of Good Practices can therefore be viewed positively and perceived as an effort to comply with the obligations of the AI Act, which could reduce the amount of the fine, which can reach 3% of annual worldwide turnover or 15,000,000 euros, whichever is higher.
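For illustration, the fine ceiling under Article 101 of the AI Act (3% of annual worldwide turnover or EUR 15,000,000, whichever is higher) amounts to a simple calculation; the function name is ours, not from the regulation:

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine for GPAI providers under Art. 101 AI Act:
    3% of annual worldwide turnover or EUR 15,000,000, whichever is
    higher (function name and interface are illustrative)."""
    return max(0.03 * annual_worldwide_turnover_eur, 15_000_000)

# A provider with EUR 2 billion in turnover: the 3% cap gives EUR 60 million.
# A provider with EUR 100 million in turnover: the EUR 15 million floor applies.
```

The actual amount imposed would of course be set case by case, with Code commitments among the factors the Commission takes into account.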
IV. Who is covered by the European Code of Good Practices on Artificial Intelligence?
The European Code of Good Practices primarily concerns two main categories of actors:
On the one hand, there are the providers of general-purpose AI models (General-Purpose AI Providers), who are directly subject to the obligations of the Code when they make their solutions available. These actors include technology companies specializing in AI development, as well as software publishers. As such, they must, among other things, document the technical characteristics of their solutions (origin of data, decision-making processes, transparency of algorithms), in accordance with the requirements detailed in the Transparency Section of the Code.
On the other hand, end users are indirectly affected by the Code. Although they are not signatories, these users have a vested interest in favouring models that comply with the Code, in order to ensure greater legal certainty and reduce the risk of incidents related to the use of AI in their professional practices. Compliance with the Code may serve as evidence of due diligence in the event of a dispute or an audit by the competent authorities, strengthening users' credibility and regulatory standing with their clients and institutional partners.
Aumans Avocats: specialists in IT/Data, data protection and DPO outsourcing.
As a law firm specializing in IT/Data and data protection, we are at your disposal to support you with all your projects. Whether you are a startup, an SME or a group of companies, our expertise will help you navigate the complex landscape of regulation and compliance with confidence. Do not hesitate to contact us for personalized advice to secure your digital future.
Sources:
- https://eur-lex.europa.eu/eli/reg/2024/1689/oj?locale=fr – Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 establishing harmonised rules concerning artificial intelligence (AI Act). ↩︎
- https://digital-strategy.ec.europa.eu/fr/policies/ai-code-practice – General AI Code of Practice ↩︎
- https://digital-strategy.ec.europa.eu/fr/library/first-draft-general-purpose-ai-code-practice-published-written-independent-experts – Publication of the first draft of the code of practice on general-purpose AI, drafted by independent experts. ↩︎
- https://digital-strategy.ec.europa.eu/fr/library/second-draft-general-purpose-ai-code-practice-published-written-independent-experts – Publication of the second draft of the code of practice on general purpose AI, drafted by independent experts ↩︎
- https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts – Third Draft of the General Purpose AI Code of Practice Published, Written by Independent Experts ↩︎
- Ibid, SAFETY AND SECURITY SECTION, “We would like to thank all stakeholders for their valuable input and collaborative stance.” ↩︎
- Ibid, Engagements, “Additional time for consultation and deliberation – both externally and internally – will be needed to further improve the Code” ↩︎
- Ibid, SAFETY AND SECURITY SECTION ↩︎
- https://eur-lex.europa.eu/eli/reg/2024/1689/oj?locale=fr – Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 establishing harmonised rules concerning artificial intelligence (AI Act), Art 3(63) ↩︎
- Ibid, Art 3(65) ↩︎
- https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts – Third Draft of the General Purpose AI Code of Practice Published, Written by Independent Experts / TRANSPARENCY SECTION / COPYRIGHT SECTION ↩︎
- Ibid, TRANSPARENCY SECTION, “Signatories commit to providing additional information necessary to enable downstream providers to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to the AI Act.” ↩︎
- Ibid, COPYRIGHT SECTION, “In order to fulfil the obligation to put in place a policy to comply with Union law on copyright […] Signatories commit to drawing up […] implementing a copyright policy […]” ↩︎
- Ibid, SAFETY AND SECURITY SECTION, “The Safety and Security Section of the Code of Practice describes one way in which leading AI companies can comply with the AI Act. The AI Act is a binding AI regulation passed by the European Union in 2024. The part of the AI Act that is concerned with obligations for general-purpose AI models (GPAI) will become effective on August 2, 2025.” ↩︎