Lopez-Perez, Laura; Merino, Beatriz; Rujas, Miguel; Maccaro, Alessia; Guillén, Sergio; Pecchia, Leandro; Fernanda Cabrera, María; Teresa Arredondo, Maria; Fico, Giuseppe. Regulatory Frameworks and Validation Strategies for Advancing Artificial Intelligence in Healthcare. Vol. 113 (2024), pp. 260-265. (Paper presented at the conference "Regulatory Frameworks and Validation Strategies for Advancing Artificial Intelligence in Healthcare") [DOI: 10.1007/978-3-031-61628-0_28].
Regulatory Frameworks and Validation Strategies for Advancing Artificial Intelligence in Healthcare
Alessia Maccaro;Leandro Pecchia;
2024
Abstract
As AI technologies progress rapidly, there is an increasing need for tailored regulations that effectively address data provision, sharing, utilization, and knowledge generation. This paper examines the essential regulations and emphasizes the crucial role of AI model validation in ensuring the reliability and effectiveness of AI-driven solutions. A structured four-phase methodology for the external validation of AI models is introduced. The integration of these regulatory frameworks and the implementation of a DataLab are deemed imperative for fostering transparency and accountability and for enhancing patient outcomes within the rapidly evolving landscape of AI in healthcare. Through a comprehensive examination of key regulations and a structured validation approach, this research underscores the critical need for rigorous scrutiny and validation of AI models to ensure their reliability and efficacy in improving healthcare delivery. This study aims to lay the foundation for further exploration and advancement in this pivotal area, offering a roadmap for stakeholders, researchers, and policymakers to navigate the complexities of AI integration in healthcare while prioritizing patient safety and quality of care. This work was carried out within the framework of the GATEKEEPER project, funded by the European Commission under the Horizon 2020 program.
File | Type | License | Size | Format | Access
---|---|---|---|---|---
GK_EMBEC2024_FINAL (1).pdf | Publisher's version (PDF) | Publisher's copyright | 217.61 kB | Adobe PDF | Authorized users only
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.