Marguerite Brac de La Perrière, partner at LERINS and an expert in digital health, co-authored the chapter “Ethico-legal aspects of artificial intelligence in the doctor-patient relationship” in the Dalloz book “Covid-19, One Health and Artificial Intelligence”, together with the ethicist Jérôme Beranger, associate researcher at CERPOP, Inserm.
Here, the lawyer deciphers the French legal framework and the core of the European framework under construction governing the impact of artificial intelligence on the doctor-patient relationship, and the new obligations it places on the actors involved, publishers of AI systems and the health professionals who use them, for the benefit of patients’ medical care.
The ethical and legal approach to Artificial Intelligence in the doctor-patient relationship
Summary:
With the emergence of innovations and technological advances, illustrated in particular by the extremely rapid development of Artificial Intelligence (AI) solutions, the ancient medicine of Hippocrates and the doctor-patient relationship have gradually evolved. One of the main sources of change in our healthcare system is computerization, digitization and technical networking, which affect management, organization, and the delivery of care and services. AI applications are entering the caregiver-patient relationship, and this intrusion justifies ethical vigilance.
Given the major stakes in terms of quality of care and liability, specific legal requirements are emerging. In France, the Bioethics Law of August 2, 2021 has already introduced obligations to inform patients and the professionals involved in preventive, diagnostic or therapeutic acts, not only of the use of an AI system but also of the resulting interpretation, so as to foster individual discussion between patient and doctor and to let both take ownership of the results. Along the same lines, a proposal for a European regulation on AI enshrines obligations of transparency and human oversight for high-risk AI systems.
A legal look at the doctor-patient relationship in the context of Artificial Intelligence
Among the Artificial Intelligence (AI) tools used, or intended to be used, in the health field, the systems that crystallize the most fears are those likely to replace the doctor in the performance of diagnostic and therapeutic medical acts.
Indeed, how can we ensure that the results provided by algorithms are used only as an aid, and do not lead to a loss of autonomy for doctors or an impoverishment of the medical act?
How can we give physicians the means to assess the results and, if necessary, to depart from them?
To respond to these risks and fears, a legal framework is necessary.
II. The French legal framework
Thus, specific provisions of the Bioethics Law [28] have been adopted and introduced into Article L4001-3 of the Public Health Code:
I. – The health professional who decides to use, for an act of prevention, diagnosis or care, a medical device incorporating algorithmic data processing whose learning was carried out on massive data shall ensure that the person concerned has been informed of this and that he or she is, where appropriate, warned of the resulting interpretation.
II. – The health professionals concerned shall be informed of the use of this data processing. The patient’s data used in this processing and the results obtained shall be accessible to them.
III. – The designers of the algorithmic processing mentioned in I shall ensure that its operation can be explained to users.
IV. – An order of the minister responsible for health shall establish the nature of the medical devices mentioned in I and their conditions of use, after consultation with the Haute Autorité de santé and the Commission nationale de l’informatique et des libertés.
These provisions thus aim at human supervision, also called the human guarantee, requiring:
– on the one hand, that the health professional inform the patient of the use of an AI solution in the context of a preventive, diagnostic or care act and, where appropriate, of its result; and
– on the other hand, that the designer of the algorithm ensure that its operation can be explained to users.
The first requirement establishes an obligation for the healthcare professional to inform the patient.
It thus opens an exchange between the physician and the patient on the use of an AI solution and the associated result and, by the same token, invites the physician to justify the choice to follow the AI recommendation or to depart from it, and to take ownership of the diagnostic or therapeutic medical act.
The second requirement aims to ensure that the physician has the necessary and sufficient information to understand how the AI solution works, and thus to take ownership of the result and deviate from it if necessary.
Finally, by providing that “the health professionals concerned are informed of the use of this data processing” and that “the patient’s data used in this processing and the results obtained are accessible to them”, the text establishes an obligation of traceability of the augmented decision. Its purpose is to allow the “professionals concerned”, i.e. those involved in the care, to assess the relevance of the doctor’s act as “augmented” by the AI solution, and to verify that the doctor retained the necessary autonomy in relation to the algorithm.
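To make this traceability obligation concrete, here is a minimal sketch of what an audit-trail entry for an “augmented” decision could look like. It is an illustration only: the statute prescribes the goals (information, access, traceability), not a schema, and every field name below is a hypothetical choice.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AugmentedDecisionRecord:
    """Hypothetical audit-trail entry for an AI-assisted medical act.

    Field names are illustrative; Article L4001-3 prescribes traceability
    and access for the professionals involved, not a data format.
    """
    patient_id: str          # pseudonymous reference to the patient
    ai_system: str           # identifier and version of the AI solution
    input_data_ref: str      # pointer to the patient data fed to the algorithm
    ai_output: str           # result returned by the AI system
    physician_decision: str  # the act actually decided by the physician
    followed_ai: bool        # whether the recommendation was followed
    rationale: str           # physician's stated reasons, especially when departing
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Usage: records of this kind are kept so that the "professionals concerned"
# can later assess the relevance of the augmented act.
record = AugmentedDecisionRecord(
    patient_id="pseudo-4821",
    ai_system="retina-screen v2.3",
    input_data_ref="imaging-study/2021-09-14",
    ai_output="diabetic retinopathy, grade 2",
    physician_decision="referral to ophthalmology",
    followed_ai=True,
    rationale="AI grading consistent with clinical examination",
)
```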
III. The European legal framework under construction
The Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on AI (the draft AI regulation), which aims to develop a single market for lawful, safe and trustworthy AI solutions, also proposes to frame this human supervision, especially for the use of high-risk AI systems (including, in particular, AI systems constituting or integrated into medical devices), by setting requirements for:
– Transparency and provision of information to users [29], “to enable users to interpret the results of the system and to use it appropriately.”
Thus, high-risk AI systems must be accompanied by instructions for use in an appropriate digital or other format, containing concise, complete, correct and clear information that is relevant, accessible and comprehensible to users (characteristics similar to the General Data Protection Regulation’s requirements for information on the processing of personal data). These instructions describe the characteristics of the AI system, including its capabilities, limitations, intended purpose, performance, expected lifetime, input data, training and validation data, and the human control measures, including the technical measures put in place to facilitate the interpretation of the AI system’s results by users (this content is sketched in code after the list below);
– Human control by design [30]: “including through appropriate human-machine interfaces, effective oversight by natural persons during the period in which the AI system is in use”, aiming to prevent or minimize the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse.
Human oversight must be ensured through measures that give the persons entrusted with it (the health professionals), depending on the circumstances, the ability to do the following (a minimal sketch after this list illustrates these measures in code):
– fully understand the capabilities and limits of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
– remain aware of the possible tendency to automatically or excessively rely on the results produced by a high-risk AI system (“automation bias”);
– correctly interpret the results of the high-risk AI system, taking into account in particular the characteristics of the system and the interpretation tools and methods available;
– decide, in any particular situation, not to use the high-risk AI system, or to disregard, override or reverse the result it provides;
– intervene in the operation of the high-risk AI system or interrupt that operation by means of a stop button or a similar procedure.
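As a thought experiment, the sketch below shows how these oversight measures might be wired into a decision-support tool. Everything here is hypothetical: the `supervised_inference` wrapper, the `reviewer` callable standing in for the health professional, and the action names are invented, since the proposal prescribes outcomes rather than an API.

```python
from enum import Enum

class OversightAction(Enum):
    ACCEPT = "accept"        # use the AI output as-is
    DISREGARD = "disregard"  # set the output aside for this case
    OVERRIDE = "override"    # substitute the professional's own result
    STOP = "stop"            # interrupt the system ("stop button")

def supervised_inference(model, case, reviewer):
    """Run a high-risk AI system under human control (illustrative only).

    `reviewer` stands in for the health professional: it receives the AI
    output and returns an (OversightAction, final_result) pair, so nothing
    is acted upon before a natural person has decided.
    """
    ai_output = model(case)
    action, final_result = reviewer(ai_output)
    if action is OversightAction.STOP:
        raise RuntimeError("AI system interrupted by the human supervisor")
    if action is OversightAction.ACCEPT:
        return ai_output
    return final_result  # output disregarded or overridden by the professional
```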
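Similarly, the instructions-for-use content required by the transparency obligation above can be pictured as a structured document. The encoding below is one possible model, with hypothetical field names mapped onto the items listed in the proposal; the text itself imposes no particular format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InstructionsForUse:
    """One possible model of the Article 13 instructions-for-use content."""
    intended_purpose: str                # what the high-risk AI system is for
    capabilities: List[str]              # what the system can do
    limitations: List[str]               # known limitations and contraindications
    performance: Dict[str, float]        # accuracy/robustness metrics
    expected_lifetime: str               # period over which performance holds
    input_data_spec: str                 # requirements on the input data
    training_validation_data: str        # summary of training and validation data
    human_oversight_measures: List[str]  # measures helping users interpret outputs
```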
With this information and these measures, users of high-risk AI systems should be able to use them safely.
In turn, users are required to operate the systems in accordance with the accompanying instructions for use, to exercise control over the input data, and to ensure that the input data are relevant in view of the intended purpose of the high-risk AI system.
They must also monitor the operation of the high-risk AI system on the basis of the instructions for use [31]. When they have reason to consider that use in accordance with those instructions may result in the AI system presenting a risk, they must inform the provider or distributor and suspend use of the system; likewise, when they identify a serious incident or malfunction, they must interrupt use of the AI system.
Users are also obliged to keep, for a period appropriate to the purpose of the system, the logs generated automatically by the high-risk AI system, and to use the information supplied under the providers’ transparency obligation to comply with their own obligation to carry out a data protection impact assessment.
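A minimal sketch of these two user duties, log retention and suspension on suspected risk, under stated assumptions: the five-year window, the `system` object and the `notify_provider` callable are all hypothetical, since the proposal fixes neither a retention period nor an interface.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window: the proposal only requires a period
# "appropriate to the purpose" of the system, without giving a figure.
RETENTION = timedelta(days=365 * 5)

def prune_logs(log_entries):
    """Keep only the automatically generated logs still inside the window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [entry for entry in log_entries if entry["timestamp"] >= cutoff]

def on_suspected_risk(system, notify_provider):
    """Inform the provider or distributor and suspend use of the system."""
    notify_provider(f"Suspected risk when using {system.name} per its instructions")
    system.suspended = True
```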
The text provides for dissuasive penalties: non-compliance of the AI system with the transparency and human oversight requirements is subject to an administrative fine of up to EUR 20,000,000 or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher [32].
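The “whichever is higher” rule is simple arithmetic; the following lines make it concrete (the turnover figure is an invented example).

```python
def max_administrative_fine(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling of the fine for a company under the proposal's Article 71(4):
    EUR 20 million or 4% of total worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# Example: a company with EUR 2 billion in turnover faces a ceiling of
# EUR 80 million, since 4% of turnover exceeds the EUR 20 million floor.
assert max_administrative_fine(2_000_000_000) == 80_000_000
```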
The above provisions of the Bioethics Law and the proposed AI regulation are consistent with the work of the European Commission and faithfully translate into law the recommendations of the WHO in its first global report on AI applied to health, published on June 28, 2021, which proposes six guiding principles for its design and use, among them “Protect human autonomy”, “Ensure transparency, explainability and intelligibility” and “Foster responsibility and accountability”.
Conclusion:
So-called “4.0” medicine is developing exponentially, relying on Big Data, algorithms and AI systems to move towards individualized, personalized and predictive medicine. AI is transforming practices, multiplying resources, reshaping the organization and conditions of care, and significantly affecting the doctor-patient relationship.
In the context of this major transformation, it is worth asking what medicine with AI should look like. What is the place of healthcare professionals and users in this digitized healthcare environment, where objects, robots, machines and other autonomous expert systems intervene? What autonomy is left to health professionals? What ethical and legal approach can guarantee the rights and freedoms of individuals?
The challenge is to develop trusted AI that optimizes the healthcare system without distorting the relationship between caregiver and patient, and without undermining the rights and freedoms of patients. In this context, transparency and human supervision in particular appear to be the keystones. Beyond the specific requirements applicable to high-risk AI systems, a systemic and transversal neo-Darwinian approach based on the concept of Ethics by Evolution is required.
Bibliography:
[1] WHO. (2021). Ethics and Governance of Artificial Intelligence for Health. Report; June 28.
[2] Boyd, D., & Crawford, K. (2012). Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), pp. 662–679.
[3] Costa, F. F. (2014). Big data in biomedicine. Drug Discovery Today, 19(4), pp. 433–440.
[4] Mittelstadt, B. D., Fairweather, N. B., McBride, N., & Shaw, M. (2011). Ethical issues of personal health monitoring: A literature review. In ETHICOMP 2011 conference proceedings (pp. 313–321). Sheffield, UK.
[5] Mittelstadt, B. D., Fairweather, N. B., McBride, N., & Shaw, M. (2013). Privacy, risk and personal health monitoring. In ETHICOMP 2013 conference proceedings (pp. 340–351). Kolding, Denmark.
[6] Niemeijer, A. R., Frederiks, B. J., Riphagen, I. I., Legemaate, J., Eefsting, J. A., & Hertogh, C. M. (2010). Ethical and practical concerns of surveillance technologies in residential care for people with dementia or intellectual disabilities: An overview of the literature. International Psychogeriatrics, 22, pp. 1129–1142.
[7] FHF. (2018). Intelligence artificielle : Quels impacts et perspectives pour l’Hôpital ? Le Magazine de la FHF, n°38, Winter, pp. 13–17.
[8] Floridi, L., & Taddeo, M. (2016). What Is Data Ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083): 20160360.
[9] Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2016). Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach. SSRN Scholarly Paper ID 2906249. Rochester, NY: Social Science Research Network.
[10] Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3): 973–989.
[11] Selbst, A., & Barocas, S. (2018). The Intuitive Appeal of Explainable Machines. SSRN Scholarly Paper ID 3126971. Rochester, NY: Social Science Research Network.
[12] Kim, P. (2016). Data-Driven Discrimination at Work. William & Mary Law Review, 58: 857–936.
[13] Friedler, S., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness. arXiv:1609.07236 [cs, stat], September.
[14] Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., & Filar, B. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
[15] Campolo, A., Sanfilippo, M., Whittaker, M., & Crawford, K. (2017). AI Now 2017 Report. https://ainowinstitute.org/AI_Now_2017_Report.pdf
[16] Mittelstadt, B., & Floridi, L. (2016b). The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. Science and Engineering Ethics, 22(2): 303–341.
[17] Coeckelbergh, M. (2015). Artificial Agents, Good Care, and Modernity. Theoretical Medicine and Bioethics, 36(4): 265–277.
[18] Nakrem, S., Solbjør, M., Pettersen, I. N., & Kleiven, H. (2018). Care relationships at stake? Home healthcare professionals’ experiences with digital medicine dispensers – a qualitative study. BMC Health Services Research, 18 (January).
[19] Voarino, N. (2019). Systèmes d’intelligence artificielle et santé : les enjeux d’une innovation responsable. Doctoral thesis; Faculté de médecine de Montréal; 356 pp.
[20] Devillers, L. (2017). Des robots et des hommes : Mythes, fantasmes et réalité. Plon, Paris.
[21] Coeckelbergh, M. (2012). “How I Learned to Love the Robot”: Capabilities, Information Technologies, and Elderly Care. In The Capability Approach, Technology and Design, edited by Ilse Oosterlaken and Jeroen van den Hoven, pp. 77–86. Philosophy of Engineering and Technology. Dordrecht: Springer Netherlands.
[22] Béranger, J. (2021). Comment concilier éthique et intelligence artificielle ? ActuIA, n°5, p. 59.
[23] Rougé-Bugat, M.-E., & Béranger, J. (2021). La relation médecin généraliste-patient face à la numérisation de la médecine. DSIH; February.
[24] Rougé-Bugat, M.-E., & Béranger, J. (2021). Évolution de la relation médecin généraliste-patient à l’heure de la médecine digitale : Cas de la prise en charge du patient atteint de cancer. Les Tribunes de la santé, n°68.
[25] Rougé-Bugat, M.-E., & Béranger, J. (2021). Évolution et impact du numérique dans la relation médecin généraliste-patient : Cas du patient atteint de cancer. Revue officielle de l’Académie Nationale de Médecine; July.
[26] Beauchamp, T. L., & Childress, J. (2001). Principles of Biomedical Ethics. 5th edition; Oxford University Press, New York / Oxford.
[27] Hervé, C., & Stanton-Jean, M. (2018). Innovations en santé publique, des données personnelles aux données massives (Big data) : Asp.
[28] Code de la Santé Publique, Article L4001-3, introduced by Article 17 of Law n° 2021-1017 of August 2, 2021 on bioethics.
[29] Draft regulation on artificial intelligence, Article 13.
[30] Draft regulation on artificial intelligence, Article 14.
[31] Draft regulation on artificial intelligence, Article 29.
[32] Draft regulation on artificial intelligence, Article 71(4).