Use of AI Systems in healthcare: how to meet multiple information obligations?

Healthcare professionals are increasingly using artificial intelligence systems (hereinafter “AI systems”) in their practices and in patient care.

However, as deployers of these systems (under the AI Act) and as controllers of their patients’ personal data (under the GDPR), healthcare professionals are subject to several information obligations. These are further complemented by the information obligations applicable to medical care under the French Public Health Code.

This article aims to provide guidance to professionals to help them identify and comply with their various information obligations toward patients.

I. Legal framework applicable to the use of AI systems in healthcare

Communication with patients is governed by several legal texts. The French Public Health Code (hereinafter “PHC”) requires informing the patient about their health condition and the care proposed (Article L.1111-2 PHC).

Since 2021, Article L.4001-3 PHC has also provided a specific right to information when a doctor uses a medical device with an “algorithmic function” in the context of prevention, diagnosis, or care (a definition that may correspond to an AI system).

In addition, the GDPR requires any data controller (here, the healthcare institution or practitioner) to inform individuals about the characteristics of the processing of their personal data in accordance with Articles 12 to 14 of the GDPR (purpose of the processing, categories of data collected, recipients, retention period, individual rights, etc.). The EDPB (successor to the WP29) has also adopted guidelines on transparency measures toward data subjects.

The draft guidelines on the use of AI systems in healthcare recently published by the French National Authority for Health (HAS) and the CNIL, currently under public consultation, include a section dedicated to informing individuals and to consent. These guidelines acknowledge the multiplicity of information obligations imposed on healthcare professionals.

Furthermore, the AI Act establishes a right to explanation of individual decisions taken on the basis of the output of certain high-risk AI systems (Article 86 AI Act), as well as the possibility for patients to lodge a complaint with the market surveillance authority (Article 85 AI Act).

These rights are in addition to those provided under the GDPR (access, rectification, erasure, restriction, objection, and the right to lodge a complaint with the CNIL).

If the patient interacts directly with an AI system (e.g., a chatbot providing medical information), they must also be clearly informed that they are interacting with an AI system (Article 50 AI Act).

Beyond these obligations, the HAS and the CNIL point out that transparency is essential to demystify AI systems and maintain trust between patients and healthcare professionals.

European and French legal frameworks thus provide a detailed set of rules regarding information obligations when using AI systems in healthcare. Transparency operates at several levels.

II. Types of information to provide to patients

It is necessary to distinguish between several levels of information and to adapt each one accordingly. The HAS and CNIL draft guidelines mentioned above identify the following obligations:

  • General information on the use of AI in healthcare: all patients should be informed that AI systems may be used in their care journey (e.g., radiological diagnosis, decision support). For example, a simple message may be displayed in handbooks or waiting rooms: “As part of your care, artificial intelligence systems may be used to improve your diagnosis. For more information, please visit our website or contact reception.” (Recommendation 7.2).
  • Information on the use of personal data: in accordance with the GDPR, patients must be informed of the identity of the data controller, the purposes of the processing, the categories of data used (clinical data, imaging, biological information, etc.), the recipients of the data (AI system providers, healthcare organizations), retention periods, their rights, and the contact details of the DPO. This information must be provided before any data processing and must be clear and easily accessible.
  • Information on secondary use of data: if data collected in the context of care are reused (e.g., to improve AI models or for health research), patients must be informed in advance, at the time of data collection. Such secondary uses constitute distinct processing operations and must independently comply with all GDPR requirements.
  • Information on patients’ rights: patients must be informed of their rights regarding their health data (access, rectification, erasure, objection, restriction, etc.) and of how to exercise them (transparency portals, DPO contact, CNIL complaints). The HAS and CNIL guidelines also refer to the right to file complaints (CNIL, market surveillance authorities) and the right to explanation if an individual decision has been made using an AI system (Article 86 AI Act).

These distinct types of information must be formalized through various forms and media to ensure the effectiveness of transparency obligations.

III. Best practices and methods for providing information

The HAS and CNIL guidelines point out the importance of clear and educational communication, avoiding technical jargon while highlighting the benefits for patients (efficiency, safety, support, speed, etc.).

The traceability of information measures, which in particular enables healthcare professionals and institutions to demonstrate compliance, can be ensured, for example, by recording them in the patient’s medical file. Healthcare staff also play a key role in delivering information, adapting it to each patient’s characteristics and expectations.

The guidelines propose a three-level approach to information:

  • General information: informational messages should be provided to all patients in waiting rooms, on websites, posters, or other media to indicate the possible use of AI in their care journey.
  • Specific information for the individual: when an AI system has been used in a patient’s care journey, medical reports or documentation should include details such as the use of the AI system, its purpose, its name and version, and the doctor who validated it. Additional tools such as videos or explanatory handbooks may also be used.
  • Comprehensive prior information: in cases involving high-risk AI systems as defined by the AI Act, full information must be provided before the system is used if its use may pose risks to individuals’ rights.

More broadly, it is important to consider the variety of formats and media for providing information. Delivering information in multiple formats and at different stages of the care journey helps patients better understand how AI systems work and how their personal data are processed.

Practical aspects such as the choice of communication media, staff training, and procedures ensuring the effective exercise of rights are essential measures that healthcare institutions must implement. These practical aspects will be the subject of a dedicated webinar* co-hosted by Aumans Avocats and Adequacy on April 16, 2026, aimed at helping institutions develop AI system projects and ensure regulatory compliance.

Register for the Aumans Avocats and Adequacy webinar
“Ensure the compliance of your AI projects in healthcare”
April 16, 2026

* Please note that the webinar will be broadcast in French only.

AUMANS AVOCATS (formerly FOUSSAT AVOCATS & DEROULEZ AVOCATS)
AARPI
Paris +33 (0)1 85 08 54 76 / Lyon +33 (0)4 28 29 14 92 /
Marseille 
+33 (0)4 84 25 67 89 / Bruxelles +32 (0)2 318 18 36

Contact us
