Discussing the Role of Explainable AI and Evaluation Frameworks for Safe and Effective Integration of Large Language Models in Healthcare

Authors

S. Reddy, A. Lebrun, A. Chee, D. Kalogeropoulos

DOI:

https://doi.org/10.30953/thmt.v9.485

Keywords:

AI, artificial intelligence, Converge2Xcelerate, healthcare, large language models, LLM

Abstract

The integration of artificial intelligence (AI), specifically large language models (LLMs), into healthcare continues to accelerate, necessitating thoughtful evaluation and oversight to ensure safe, ethical, and effective deployment. This editorial summarizes key perspectives from a recent panel discussion among AI experts on central issues in implementing LLMs for clinical applications. Key topics include: the potential of explainable AI to foster transparency and trust; challenges in aligning AI with variable global healthcare protocols; the importance of evaluation via translational and governance frameworks tailored to healthcare contexts; scepticism around overly expansive uses of LLMs for conversational interfaces; and the need to judiciously validate LLMs according to risk level. The discussion highlights explainability, evaluation, and careful deliberation with healthcare professionals as pivotal to realizing benefits while proactively addressing the risks of broader AI adoption in medicine.


References

Telehealth and Medicine Today. ConV2X symposium [Internet]. [cited 2024 Feb 1]. Available from: https://telehealth.conv2xsymposium.com

Wahl B, Cossy-Gantner A, Germann S, Schwalbe NR. Artificial Intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? BMJ Glob Health. 2018;3:e000798. https://doi.org/10.1136/bmjgh-2018-000798

Char DS, Shah NH, Magnus D. Implementing machine learning in health care—addressing ethical challenges. N Engl J Med. 2018;378:981–3. https://doi.org/10.1056/NEJMp1714229

Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17. https://doi.org/10.1186/s12916-019-1426-2

Adadi A, Berrada M. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access. 2018;6:52138–60. https://doi.org/10.1109/ACCESS.2018.2870052

Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1:206–15. https://doi.org/10.1038/s42256-019-0048-x

Reddy S, Rogers W, Makinen VP, Coiera E, Brown P, Wenzel M, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform. 2021;28(1):e100444. https://doi.org/10.1136/bmjhci-2021-100444

Kalogeropoulos D, Barach P. Telehealth’s role enabling sustainable innovation and circular economies in health. Telehealth Med Today. 2023;8(1). https://doi.org/10.30953/thmt.v8.409

Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell. 2019;267:1–38. https://doi.org/10.1016/j.artint.2018.07.007

Published

2024-04-27

How to Cite

Reddy, S., Lebrun, A., Chee, A., & Kalogeropoulos, D. (2024). Discussing the Role of Explainable AI and Evaluation Frameworks for Safe and Effective Integration of Large Language Models in Healthcare. Telehealth and Medicine Today, 9(2). https://doi.org/10.30953/thmt.v9.485

Issue

9(2)

Section

Opinions, Perspectives, Commentary
