SPECIAL ISSUE

Artificial Intelligence and Healthcare: Regulatory and Legal Concerns

K. Ganapathy*, M.Ch (NEURO) FACS FICS FAMS PhD

Director, Apollo Telemedicine Networking Foundation, Chennai, India

Abstract

We are in a stage of transition as artificial intelligence (AI) is increasingly being used in healthcare across the world. Transitions offer opportunities compounded with difficulties. It is universally accepted that regulations and the law can never keep up with the exponential growth of technology. This paper discusses liability issues that arise when AI is deployed in healthcare. Ever-evolving, future-ready, user-friendly, uncomplicated regulatory requirements that promote compliance and adherence are needed. Regulators have to understand that stand-alone software could itself be a medical device: software as a medical device (SaMD). The benefits of AI could be delayed if slow, expensive clinical trials are mandated. Regulations should distinguish between diagnostic errors, malfunction of the technology, and errors due to the initial use of inaccurate or inappropriate training data sets. The sharing of responsibility and accountability when the implementation of an AI-based recommendation causes clinical problems is not clear. Legislation is necessary to allow the apportionment of damages consequent to the malfunction of an AI-enabled system. Product liability is ascribed to defective equipment and medical devices. However, Watson, the AI-enabled supercomputer, is treated as a consulting physician and is not categorised as a product. In India, algorithms cannot be patented. There are no specific laws enacted to deal with AI in healthcare. DISHA, the Digital Information Security in Healthcare Act, would hopefully cover some of these issues when implemented in India. Ultimately, the law is interpreted contextually, and perceptions could differ among patients, clinicians, and the legal system. This communication aims to create the necessary awareness among all stakeholders.

Keywords: algorithm; artificial intelligence; Digital Information Security in Healthcare Act; regulatory requirements; software

 

Citation: Telehealth and Medicine Today 2021, 6: 252 - http://dx.doi.org/10.30953/tmt.v6.252

Copyright: © 2021 The Authors. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Published: 23 April 2021

Competing interests and funding: There is no conflict of interest. No funding was received to support this work.

*Correspondence: K. Ganapathy. Email: drganapathy@apollohospitals.com

 

A century ago, electricity transformed several industries. Today, artificial intelligence (AI) has the potential to radically change every discipline. Using AI to reach a patient is no longer a question of ‘if’ – it is a question of ‘how’ and a matter of now!

AI refers to the collection of technologies that equip machines with higher levels of intelligence to perform tasks such as perceiving, learning, problem-solving, and decision making. AI-based systems ride upon three waves: miniaturization of computing power, networking of sensors and devices, and affordable internet access. The first wave put the computational power of mainframes in the hands of ordinary citizens, the second generated massive amounts of data with unprecedented granularity, and the third made all this universally accessible.

Advanced algorithms, large data sets, and powerful computing are now being leveraged to assist patient care. Complex cognitive tasks and real-time analysis of complex data are now a reality (1). Information gathering, processing, learning, and reasoning are the hallmarks of AI (2).

Alan Turing, one of the founders of modern computing and AI, recognised this as early as 1950. His 'Turing Test' presupposes that intelligent behaviour of a computer comprises the ability to achieve human-level performance in cognition-related tasks (3). Even after making allowance for unprecedented hype, it is undeniable that in the coming decade the deployment of AI will cause a paradigm shift in healthcare delivery. Powerful AI techniques can unlock clinically relevant information hidden in massive amounts of data. As with other disruptive technologies, the potential for impact should not be underestimated. As Gartner remarked (4),

'Physicians must begin to trust use of AI, so they are comfortable using it to augment their clinical decision making. There is so much information when making medical and diagnostic decisions that it is truly beyond the cognitive capabilities of the human brain to process it all.'

Evidence-based medicine, leading to clinical decisions, relies on insights from past data. AI is able to learn from each incremental case and can be exposed, within minutes, to more cases than a clinician could see in many lifetimes (5). Robust, prospective clinical evaluation is essential to ensure that AI systems are safe and effective. Clinically applicable performance metrics should include how AI affects the quality of care, the variability among healthcare professionals, the efficiency and productivity of clinical practice, and, most importantly, patient outcomes (6).

Reliability of training data sets

A clinician is expected to know the answers when asked why a specific management option is recommended. Similarly, when clinicians use a specific AI algorithm, even if approved, they should ideally know how the training, testing, and validation were done, including the numbers in each group. They must be convinced that the data used initially to develop the algorithm are truly representative of the clinical gold standard.
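To make the training/testing/validation vocabulary concrete, the following is a minimal sketch, in Python, of a stratified three-way split that reports the numbers in each group and their class balance. The synthetic data, the 70/15/15 proportions, and all names here are illustrative assumptions, not taken from any algorithm or study cited in this paper.

```python
# Minimal sketch: stratified train/validation/test split of a labelled
# dataset, reporting the numbers in each group and their class balance.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
X = rng.normal(size=(10_000, 20))      # 10,000 synthetic cases, 20 features
y = rng.integers(0, 2, size=10_000)    # binary label, e.g. disease present

# 70% train, 15% validation, 15% test; stratifying preserves class balance
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)

for name, labels in [("train", y_train), ("validation", y_val), ("test", y_test)]:
    print(f"{name}: n = {len(labels)}, positive fraction = {labels.mean():.3f}")
```

Disclosing exactly these per-group numbers is the kind of transparency a clinician could reasonably expect from an algorithm's developers.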

A machine learning (ML) algorithm to detect papilledema was trained on 14,341 fundus photographs from a retrospective dataset (BONSAI), and the model was then externally tested on 1,505 fundus photographs from another retrospective dataset (7). Of the 82 clinical AI studies reviewed in two systematic reviews and meta-analyses, only 11 were prospective and only seven were randomized controlled trials (RCTs) (8). The European Union's General Data Protection Regulation in fact requires that ML predictions be explainable, especially those that have the potential to affect users significantly. Explainable ML models instil confidence and are likely to result in faster adoption in clinical settings. There is a growing interest in interpretable models in deep learning (9). ML models can make errors in output that may be hard to foresee. Such errors, if undetected during the regulatory approval process, may lead to catastrophic consequences when AI models are deployed at scale (10).
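To illustrate the external-testing step described above, the sketch below evaluates a binary classifier trained on one dataset against a second, independent dataset and reports sensitivity, specificity, and area under the ROC curve (AUC). The model, both datasets, and all names are hypothetical stand-ins; this is not the pipeline of the cited papilledema study.

```python
# Minimal sketch: external validation of a trained binary classifier on an
# independent dataset, reporting clinically familiar metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(seed=0)
# Stand-in for a model trained on the development dataset ...
X_dev, y_dev = rng.normal(size=(1000, 5)), rng.integers(0, 2, size=1000)
model = LogisticRegression().fit(X_dev, y_dev)
# ... and a stand-in for an external dataset from a different source
X_ext, y_ext = rng.normal(size=(500, 5)), rng.integers(0, 2, size=500)

probs = model.predict_proba(X_ext)[:, 1]       # predicted probabilities
preds = (probs >= 0.5).astype(int)             # hard labels at a 0.5 threshold
tn, fp, fn, tp = confusion_matrix(y_ext, preds).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}")   # true-positive rate
print(f"specificity = {tn / (tn + fp):.3f}")   # true-negative rate
print(f"AUC = {roc_auc_score(y_ext, probs):.3f}")
```

A marked drop in these metrics between internal and external testing is exactly the kind of hidden failure a regulatory approval process should be designed to catch.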

Role of trust in regulating AI in healthcare

Patients believe that their needs are unique and cannot be adequately addressed by algorithms. Yet IBM's Watson diagnoses heart disease better than cardiologists. Chatbots dispense medical advice for the United Kingdom's National Health Service in lieu of nurses. Smartphone apps detect skin cancer with the accuracy of experts. Algorithms identify eye diseases as accurately as ophthalmologists. It is believed that medical AI will pervade 90% of hospitals and replace 80% of what doctors currently do. However, the healthcare system will need to convince patients that the clinician still makes the ultimate decision (11). Trust is the key word for both clinicians and patients. Educating medical professionals on AI systems has been suggested. Until AI is accepted as the 'standard of care', an educated, informed consent needs to be given by the patient and caregiver before the clinician uses AI (12). With its ability to integrate and learn from large sets of clinical data, AI can help in diagnosis, clinical decision making, and personalized medicine (13). Standards need to be created and met, and limitations on the use of AI should also be emphasized. AI should be used to reduce, not increase, health inequality: geographically, economically, and socially.

Discussion

Can a doctor overrule a machine's diagnosis or decision, and vice versa? Who is responsible for preventing malicious attacks on algorithms? AI systems are becoming more autonomous, resulting in a greater degree of direct-to-patient advice that bypasses human intervention. The clinician's role in maintaining quality, safety, patient education, and holistic support therefore becomes even more necessary. Utilization of AI would have a psychological impact on both patients and doctors, changing the doctor-patient relationship. The doctor now needs to learn to interact with 'expert' patients, who have direct access to AI tools. Will clinicians bear the psychological stress and consequences if an AI decision results in harm to the patient? Could AI 'replacing' a doctor's advice diminish the value of clinicians, reducing trust? If AI and the doctor disagree, who will be perceived as 'right'? The degree of relative trust held in technology and in healthcare professionals may differ between individuals, between generations, and at different times. There are models of 'peaceful co-existence': autopilots on planes, for example, have improved airline safety without compromising the training of pilots. The same could apply to healthcare (14). Many deep learning algorithms used for image analysis are difficult to understand and explain to a patient. The greatest challenge to AI in healthcare is ensuring its adoption in daily clinical practice (15).

Regulatory issues

Developing regulatory requirements for the use of AI in healthcare that keep pace with an ever-changing, future-ready, evidence-based environment is a challenge (16). Regulators were once considered a hurdle for AI and associated technologies. It was soon realized that stand-alone algorithms could act as a 'medical device'. The U.S. Food and Drug Administration (FDA) has asserted its ability and intent to regulate AI in the healthcare system, and launched a digital health division in 2019 with new regulatory standards for AI-based technologies. Of the 64 AI-based, FDA-approved medical technologies, only 29 had AI-related terms or expressions mentioned in the FDA announcements. The International Medical Device Regulators Forum (IMDRF) defines 'software as a medical device' (SaMD) as software intended to be used for one or more medical purposes without being part of a hardware medical device.

The U.S. FDA has made significant strides in developing policies tailored for SaMD, to ensure that safe and effective technology reaches patients and healthcare professionals. Manufacturers need to submit a marketing application to the FDA prior to the initial distribution of their medical device. AI/ML-based software, when intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions, is a medical device under the Food, Drug, and Cosmetic (FD&C) Act and is termed SaMD by the FDA and the IMDRF (17). The FDA is regulating black-box medical algorithms. The benefits of black-box medicine, quick and cheap shortcuts to otherwise inaccessible medical knowledge, would be seriously delayed or even curtailed if slow, ponderous, expensive clinical trials were required. Traditional methods of testing new medical technologies and devices may not always work and may even slow or stifle innovation (18).

The regulatory framework in most countries does not keep abreast of developments in AI. Intellectual property laws in India at present do not recognize the patentability of algorithms, the basis on which an AI solution functions. The Patents Act expressly exempts algorithms from being 'inventions' eligible for patent protection. This may be a disincentive to the development of AI solutions (19). With appropriately tailored regulatory oversight, AI/ML-based SaMD can deliver safe and effective software functionality, improving the quality of care. Regulatory issues and adoption by the healthcare provider and the beneficiary could be perceived as barriers. AI needs to undergo extensive clinical validation before it is fully integrated into the core of the healthcare delivery system (20).

Legal issues

The law will need to catch up and keep pace with new innovations deploying AI if the potential of AI is to be fully exploited. This is particularly relevant in the healthcare arena. Several legal issues arise because no specific laws have been enacted to deal with AI. Existing regulations do not distinguish between cases where there is an error in diagnosis, a malfunction of the technology, or the initial use of inaccurate or inappropriate data in the training database. Neither the software developer nor the specific program design engineer is held liable. It is also not clear how one determines the degree of accountability of a medical professional when the wrong diagnosis or treatment is due to a glitch in the system or an error in data entry. The lack of adequate data privacy laws in many countries could result in such data sets being commercially exploited for other purposes. Will the clinician also be implicated?

There is ongoing debate about who will be held liable when robots and AI, acting autonomously, harm patients. The current consensus is that the professional is open to liability if he or she used the tool outside the scope of its regulatory approval, misused it, applied it despite significant professional doubts about the validity of the evidence surrounding it, or used it knowing that the toolmaker had obfuscated negative facts. In other cases, the liability falls back on the creators and the companies behind them. However, the interpretation of 'the law' could differ depending on many variables. This is a grey area unlikely to be resolved soon.

The standard of care in clinical settings may, in the future, include ML diagnostics, particularly when AI-enabled tools demonstrate higher precision than an experienced super-specialist. From a legal perspective, the decision to rely on AI will itself be a human medical judgement, like any other judgement. Once ML itself becomes the standard of care, ML will raise the bar (21). A higher level of accuracy could be the new standard, but the malpractice exposure of ML users will actually decrease, because by relying on ML they will be complying with the new, 'higher' professional standard (22). 'Automated decision-making', for example, means a decision that is made solely by automated means, without any human involvement (23). In the real world, terms do not always have such an unambiguous, explicit meaning.

AI involves the analysis of voluminous data to discern patterns, which are then used to predict the likelihood of future occurrences. These data sets about individuals' health can come from electronic health records, health insurance claims, purchasing records, income data, criminal records, and even social media (24). Medical malpractice and product liability issues could arise with the use of 'black-box' algorithms, as users may not be able to provide a logical explanation of how the algorithm was arrived at initially. Appropriate legislation is necessary to allow the apportionment of damages consequent to the actions of an AI-enabled system. AI systems need to develop 'moral' and 'ethical' behaviour patterns aligned with human interests. Adapting existing principles and precedents to the imminent new problem of whether a robot can be sued for malpractice will not solve the problem; standards need to be defined for robots as well. Vicarious responsibility could extend to the human surgeon overseeing the robot, the company manufacturing the robot, and the specific engineer who designed it. The culpability of each of these protagonists also needs to be taken into account. Product liability is ascribed to defective equipment and medical devices, which healthcare providers may use. Watson, the AI-enabled supercomputer, is considered equivalent to a consulting physician and is not categorised as a product (25).

It has been pointed out that ethical, legal, and cultural factors need to be considered by developers, practitioners, and policy makers when designing, using, and regulating e-health platforms (26). The Right to Privacy has been declared a fundamental right by the Supreme Court of India. The Srikrishna Committee, constituted to make recommendations on data privacy and its management, drafted the Personal Data Protection Bill, 2018, the first step in India's data privacy journey. The Ministry of Health and Family Welfare, Government of India, is in the midst of enacting a sector-specific legislation called DISHA, the Digital Information Security in Healthcare Act. All of these are relevant to the growth and development of AI in healthcare in India.

Physicians may need to give reasons to their patients if they plan to override the AI recommendation. This carries unique legal and ethical challenges, more so if the physician is unaware of the algorithms that form the basis of the AI recommendation. If complications ensue, the particular process of clinical decision-making itself may be perceived differently by patients, peers, and the legal system (27).

Conclusion

One wonders how Sir William Osler, who in 1890 opined that medicine is a science of uncertainty and an art of probability, would have reacted to the introduction of AI in healthcare (28). For centuries, practicing medicine involved acquiring as much data about the patient's health or disease as possible and making decisions. Wisdom presupposed experience, judgement, and problem-solving skills using rudimentary tools and limited resources. AI will not, and should never, replace the commiserating clinician. Hopefully, the AI-enabled clinician will spend more time empathising with his patient rather than drowning in voluminous data. No longer needing to extract meaningful data himself, he will spend his time productively managing the data extracted by AI.

Acknowledgements

The author thanks Ms. Lakshmi for providing secretarial assistance.

References

  1. Chouffani RC. AI in healthcare: beyond IBM Watson. TechTarget. 2017. Available from: http://media.techtarget.com/digitalguide/images/Misc/EA-Marketing/Eguides/AI-in-Healthcare.pdf [cited 18 January 2021].
  2. Fomenko A, Lozano A. Artificial intelligence in neurosurgery. UTMJ 2019; 96: 19–21.
  3. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Fam Med Prim Care 2019; 8: 2328–31. doi: 10.4103/jfmpc.jfmpc_440_19
  4. Hedges L. The future of AI in healthcare. Software Advice. 2020. Available from: https://www.softwareadvice.com/resources/future-of-ai-in-healthcare/ [cited 18 January 2021].
  5. Buch VH, Ahmed I, Maruthappu M. Artificial intelligence in medicine: current trends and future possibilities. Br J Gen Pract 2018; 68: 143–4. doi: 10.3399/bjgp18X695213
  6. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med 2019; 17: 1–9. doi: 10.1186/s12916-019-1426-2
  7. AI Report. Artificial intelligence for authentic engagement. Syneos Health Communications. 2018. Available from: http://syneoshealthcommunications.com/perspectives/artificial-intelligence [cited 18 January 2021].
  8. Topol EJ. Welcoming new guidelines for AI clinical research. Nat Med 2020; 26: 1318–20. doi: 10.1038/s41591-020-1042-x
  9. Google's medical AI was super accurate in a lab. Real life was a different story. MIT Technology Review. 2020. Available from: https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/ [cited 18 January 2021].
  10. Oakden-Rayner L, Dunnmon J, Carneiro G, Ré C. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. arXiv 2019. Available from: http://arxiv.org/abs/1909.12475 [cited 18 January 2021].
  11. AI can outperform doctors. So why don't patients trust it? Harvard Business Review. 2019. Available from: https://hbr.org/2019/10/ai-can-outperform-doctors-so-why-dont-patients-trust-it [cited 18 January 2021].
  12. LaRosa E, Danks D. Impacts on trust of healthcare AI. Proceedings of the 2018 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, New Orleans, LA, 2–3 February 2018. Available from: https://dl.acm.org/doi/10.1145/3278721.3278771 [cited 18 January 2021].
  13. Rigby MJ. Ethical dimensions of using artificial intelligence in health care. AMA J Ethics 2019; 21: 121–4. doi: 10.1001/amajethics.2019.121
  14. AI Report. Artificial intelligence in healthcare. Academy of Medical Royal Colleges. 2019. Available from: https://www.aomrc.org.uk/wp-content/uploads/2019/01/Artificial_intelligence_in_healthcare_0119.pdf [cited 18 January 2021].
  15. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019; 6: 94–8. doi: 10.7861/futurehosp.6-2-94
  16. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020; 22: e15154. doi: 10.2196/15154
  17. U.S. Food and Drug Administration. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD): discussion paper and request for feedback. Regulations.gov. 2019. Available from: https://www.regulations.gov/document?D=FDA-2019-N-1185-0001 [cited 18 January 2021].
  18. Price WN II. Artificial intelligence in health care: applications and legal implications. The SciTech Lawyer 14. University of Michigan Law School. 2017. Available from: https://repository.law.umich.edu/articles/1932/ [cited 18 January 2021].
  19. Narsana B, Lokhandwala M. Artificial intelligence in healthcare: applications and legal implications. ETHealthworld.com. 2018. Available from: https://health.economictimes.indiatimes.com/news/industry/artificial-intelligence-in-healthcare-applications-and-legal-implications/66690368 [cited 18 January 2021].
  20. Muoio D. Clinical AI's limitations: some are short-term, others are unavoidable. MobiHealthNews. 2020. Available from: https://www.mobihealthnews.com/news/clinical-ais-limitations-some-are-short-term-others-are-unavoidable [cited 18 January 2021].
  21. Case study supplied by Philips Healthcare. Discover the benefits of AI in healthcare. Imaging Technology News. 2018. Available from: https://www.itnonline.com/content/discover-benefits-ai-healthcare [cited 18 January 2021].
  22. Froomkin A, Kerr I, Pineau J. When AIs outperform doctors: confronting the challenges of a tort-induced over-reliance on machine learning. Ariz L Rev 2019; 61: 33. doi: 10.2139/ssrn.3114347
  23. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artif Intell Healthc 2020: 295–336. doi: 10.1016/B978-0-12-818438-7.00012-5
  24. Hoffman S. Artificial intelligence in medicine raises legal and ethical concerns. The Conversation. 2019. Available from: https://theconversation.com/artificial-intelligence-in-medicine-raises-legal-and-ethical-concerns-122504 [cited 18 January 2021].
  25. Lupton M. Some ethical and legal consequences of the application of artificial intelligence in the field of medicine. Trends Med 2018; 18: 3–7. doi: 10.15761/TiM.1000147
  26. Singh S. The medico-legal discussion of the digital and AI revolution in the Indian healthcare industry. New Age Healthcare. 2020. Available from: https://newagehealthcare.in/2020/01/06/the-medico-legal-discussion-of-the-digital-and-ai-revolution-in-the-indian-healthcare-industry/ [cited 18 January 2021].
  27. Tabriz A. Medico-legal perils of artificial intelligence and deep learning. Data Driven Investor. 2019. Available from: https://www.datadriveninvestor.com/2019/10/24/medico-legal-perils-of-artificial-intelligence-and-deep-learning/ [cited 18 January 2021].
  28. Williams D. An ode to Osler: a physician profile. Resident Student Organization. 2017. Available from: https://www.acoep-rso.org/the-fast-track/an-ode-to-osler-a-physician-profile/ [cited 18 January 2021].