Marguerite Brac de La Perrière, partner in Digital & Data at Lerins and an expert in digital health, discusses in this paper artificial intelligence and decision-support algorithms, and specifies the conditions under which patients' rights can be met. Indeed, it is worth remembering that, as paradoxical as it may seem, the key word at the heart of the design of these artificial intelligences remains the human.
Artificial intelligence, decision-support and therapeutic algorithms: what information should be provided to patients?
After having addressed, in the last two sections(1), the obligations of health professionals and then those of health institutions using artificial intelligence (AI) tools in the context of prevention, diagnosis or care, this paper specifies the conditions under which patients' rights can be met.
As explained in the previous sections, if there is one key word concerning artificial intelligence, paradoxical as it may seem, it is the human being, who must remain at the heart of every step of the design, development and deployment of any artificial intelligence system, a requirement guaranteed by both the legal framework and soft law.
This is why, almost a year ago, the WHO defined six principles to ensure that AI in the field of health works in the public interest in all countries, including in particular human autonomy and transparency.
“AI is a tool that should be at the service of people and a positive force for society, ultimately increasing human well-being”: such are the rationale and objectives of the proposed Regulation on Artificial Intelligence, which is fully in line with the Commission's overall Digital Agenda in that it contributes to promoting technologies that serve people. The proposal thus establishes a coherent, effective and proportionate framework to ensure that AI is developed in a way that respects people's rights and earns their trust.
The transparency obligations are thus intended to enable people to exercise their right to effective redress.
It is in the same spirit that the European principles for the ethics of digital health were adopted under the French Presidency of the Council of the European Union, in particular the fourth: “when artificial intelligence is implemented, everything possible has been done to ensure that it is explainable and free of discriminatory bias”.
Similarly, the provisions of the Bioethics Law, transposed into the Public Health Code, have established the principle of an obligation for the health professional to inform the patient about:
– the use, “for an act of prevention, diagnosis or care, of a medical device comprising algorithmic data processing whose learning was carried out from massive data”;
– the resulting interpretation, meaning that the patient must also be given sufficient insight to be able to appreciate it.
The information made available to patients within healthcare institutions, notably in confidentiality policies, will therefore have to be supplemented, for the services using AI, with specific information provided by healthcare professionals to patients concerning the use of an AI system: notably its context of use, its purpose, its reliability, its performance, and its specifications regarding the data used.
Finally, under the GDPR, the data subject has the right not to be subject to a decision based solely on automated processing that produces legal effects concerning him or her, or similarly significantly affects him or her, echoing the human oversight enshrined in the draft AI regulation.
In short, transparency, and therefore information, is the main obligation of designers towards users, and of users towards patients.
Let us remember that the penalties under data protection law and under the artificial intelligence framework are similar, although under the latter they can reach €20,000,000 for breaches of the transparency requirements.