Mitigating patient harm risks: A proposal of requirements for AI in healthcare
Use this link to cite: http://hdl.handle.net/2183/42231
Authors
García-Gómez, Juan M.
Blanes-Selva, Vicent
Álvarez Romero, Celia
de Bartolomé Cenzano, José Carlos
Pereira Mesquita, Felipe
Doñate-Martínez, Ascensión
Bibliographic citation
J. M. Garcia-Gomez, V. Blanes-Selva, C. Alvarez Romero, J. C. de Bartolomé Cenzano, F. Pereira Mesquita, A. Pazos, and A. Doñate-Martínez, "Mitigating patient harm risks: A proposal of requirements for AI in healthcare", Artificial Intelligence in Medicine, Vol. 167, Sept. 2025, 103168, https://doi.org/10.1016/j.artmed.2025.103168
Abstract
With the rise of Artificial Intelligence (AI), mitigation strategies may be needed to integrate AI-enabled medical software responsibly, ensuring ethical alignment and patient safety. This study examines how to mitigate the key risks identified by the European Parliamentary Research Service (EPRS). To that end, we discuss how complementary risk-mitigation requirements may ensure the main aspects of AI in healthcare: Reliability (continuous performance evaluation, continuous usability testing, encryption and use of field-tested libraries, semantic interoperability); Transparency (AI passport, eXplainable AI, data quality assessment, bias check); Traceability (user management, audit trail, review of cases); and Responsibility (regulation check, academic-use-only disclaimer, clinician double check). A survey conducted among 216 medical ICT professionals (medical doctors, ICT staff and complementary profiles) between March and June 2024 revealed that these requirements were perceived positively by all profiles. Respondents deemed explainable AI and data quality assessment essential for transparency; the audit trail for traceability; and regulatory compliance and the clinician double check for responsibility. Clinicians rated the following requirements as more relevant (p < 0.05) than technicians did: continuous performance assessment, usability testing, encryption, AI passport, retrospective case review, and the academic use check. Additionally, users found the AI passport more relevant for transparency than decision-makers did (p < 0.05). We trust that this proposal can serve as a starting point to endow future AI systems in medical practice with requirements that ensure their ethical deployment.
Description
The following are the supplementary data related to this article.
Annex 1. Map of the proposed fourteen requirements of the “Mitigating Patient Harm risks: a proposal of requirements for AI in Healthcare” study to the mitigation actions proposed by the Directorate General for Parliamentary Research Services (EPRS). https://ars.els-cdn.com/content/image/1-s2.0-S0933365725001034-mmc1.pdf
Annex 2. Fourteen requirements of the “Mitigating Patient Harm risks: a proposal of requirements for AI in Healthcare” study. https://ars.els-cdn.com/content/image/1-s2.0-S0933365725001034-mmc2.pdf
Annex 3. Type of the fourteen requirements of the “Mitigating Patient Harm risks: a proposal of requirements for AI in Healthcare” study. https://ars.els-cdn.com/content/image/1-s2.0-S0933365725001034-mmc3.pdf
Annex 4. Medical ICT sector opinion survey on proposed requirements to mitigate potential patient risk. https://ars.els-cdn.com/content/image/1-s2.0-S0933365725001034-mmc4.pdf
Rights
Attribution 3.0 Spain (CC BY 3.0 ES)