Discussing the Role of Explainable AI and Evaluation Frameworks for Safe and Effective Integration of Large Language Models in Healthcare
DOI: https://doi.org/10.30953/thmt.v9.485

Keywords: AI, artificial intelligence, Converge2Xcelerate, healthcare, large language models, LLM

Abstract
The integration of artificial intelligence (AI), specifically large language models (LLMs), into healthcare continues to accelerate, necessitating thoughtful evaluation and oversight to ensure safe, ethical, and effective deployment. This editorial summarizes key perspectives from a recent panel conversation among AI experts on central issues in implementing LLMs for clinical applications. Key topics include: the potential of explainable AI to foster transparency and trust; challenges in aligning AI with variable global healthcare protocols; the importance of evaluation through translational and governance frameworks tailored to healthcare contexts; scepticism about overly expansive uses of LLMs for conversational interfaces; and the need to validate LLMs judiciously, in proportion to risk. The discussion highlights explainability, evaluation, and careful deliberation with healthcare professionals as pivotal to realizing benefits while proactively addressing the risks of broader AI adoption in medicine.
License
Copyright (c) 2024 Sandeep Reddy, MBBS, MSc, PhD, Alexandre Lebrun, MS, MSEE, Adam Chee, PhD, Dimitrios Kalogeropoulos, PhD
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Authors retain copyright of their work, with first publication rights granted to Telehealth and Medicine Today (THMT).
THMT is published under a Creative Commons Attribution-NonCommercial 4.0 International License.