SPECIAL ISSUE
K. Ganapathy*, M.Ch (NEURO) FACS FICS FAMS PhD
Director, Apollo Telemedicine Networking Foundation, Chennai, India
We are in a stage of transition as artificial intelligence (AI) is increasingly being used in healthcare across the world. Transitions offer opportunities compounded with difficulties. It is universally accepted that regulations and the law can never keep up with the exponential growth of technology. This paper discusses liability issues when AI is deployed in healthcare. Regulatory requirements need to be continually updated, future-ready, user-friendly, and uncomplicated, promoting compliance and adherence. Regulators have to recognise that stand-alone software can itself constitute a medical device (software as a medical device, SaMD). The benefits of AI could be delayed if slow, expensive clinical trials are mandated. Regulations should distinguish among diagnostic errors, malfunction of the technology, and errors arising from the use of inaccurate or inappropriate training data sets. How responsibility and accountability are shared when implementation of an AI-based recommendation causes clinical problems is not clear. Legislation is necessary to allow apportionment of damages consequent to malfunction of an AI-enabled system. Product liability is ascribed to defective equipment and medical devices. However, Watson, the AI-enabled supercomputer, is treated as a consulting physician and not categorised as a product. In India, algorithms cannot be patented, and no specific laws have been enacted to deal with AI in healthcare. DISHA, the Digital Information Security in Healthcare Act, would, when implemented in India, hopefully address some of these issues. Ultimately, the law is interpreted contextually, and perceptions could differ among patients, clinicians, and the legal system. This communication aims to create the necessary awareness among all stakeholders.
Keywords: algorithm; artificial intelligence; Digital Information Security in Healthcare Act; regulatory requirements; software
Citation: Telehealth and Medicine Today 2021, 6: 252 - http://dx.doi.org/10.30953/tmt.v6.252
Copyright: © 2021 The Authors. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.
Published: 23 April 2021
Competing interests and funding: There is no conflict of interest. No funding was received to support this work.
*Correspondence: K. Ganapathy. Email: drganapathy@apollohospitals.com
A century ago, electricity transformed several industries. Today, artificial intelligence (AI) has the potential to radically change every discipline. Using AI to reach a patient is no longer a question of ‘if’ – it is a question of ‘how’ and a matter of now!
AI refers to the collection of technologies that equip machines with higher levels of intelligence to perform tasks such as perceiving, learning, problem-solving, and decision-making. AI-based systems ride on three waves: miniaturization of computing power, networking of sensors and devices, and affordable internet access. The first wave put the computational power of mainframes in the hands of ordinary citizens, the second generated massive amounts of data with unprecedented granularity, and the third made all this universally accessible.
Advanced algorithms, large data sets, and powerful computing are now being leveraged to assist patient care. Complex cognitive tasks and real-time analysis of complex data are now a reality (1). Information gathering, processing, learning, and reasoning are the hallmarks of AI (2).
Alan Turing, one of the founders of modern computing and AI, recognised this as early as 1950. His 'Turing Test' presupposes that intelligent behaviour of a computer comprises the ability to achieve human-level performance in cognition-related tasks (3). Even after making allowance for unprecedented hype, it is undeniable that in the coming decade the deployment of AI will cause a paradigm shift in healthcare delivery. Powerful AI techniques can unlock clinically relevant information hidden in massive amounts of data. As with other disruptive technologies, the potential for impact should not be underestimated. As Gartner remarked (4),
‘Physicians must begin to trust use of AI, so they are comfortable using it to augment their clinical decision making. There is so much information when making medical and diagnostic decisions that it is truly beyond the cognitive capabilities of the human brain to process it all’.
Evidence-based medicine, which informs clinical decisions, relies on insights from past data. AI is able to learn from each incremental case and can be exposed, within minutes, to more cases than a clinician could see in many lifetimes (5). Robust, prospective clinical evaluation is essential to ensure that AI systems are safe and effective. Clinically applicable performance metrics should include how AI affects the quality of care, the variability among healthcare professionals, the efficiency and productivity of clinical practice, and, most importantly, patient outcomes (6).
A clinician is expected to know the answers when asked why a specific management option is recommended. Similarly, when clinicians use a specific AI algorithm, even one that is approved, they should ideally know how the training, testing, and validation were carried out, including the numbers in each group. They must be convinced that the data used to develop the algorithm are truly representative of the clinical gold standard.
For example, a machine learning (ML) algorithm to detect papilledema was trained on 14,341 fundus photographs from a retrospective dataset (BONSAI) and externally tested on 1,505 fundus photographs from another retrospective dataset (7). Of the 82 clinical AI studies reviewed in two systematic reviews and meta-analyses, only 11 were prospective and only seven were randomized controlled trials (RCTs) (8). The European Union's General Data Protection Regulation in fact requires that ML predictions be explainable, especially those that have the potential to affect users significantly. Explainable ML models instil confidence and are likely to result in faster adoption in clinical settings. There is growing interest in interpretable models in deep learning (9). ML models can make errors in output that may be hard to foresee. Such errors, if undetected during the regulatory approval process, may lead to catastrophic consequences when AI models are deployed at scale (10).
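As a purely illustrative sketch of what 'knowing how the training, testing, and validation were done' can mean in practice, the following Python fragment trains a model on one dataset and evaluates it on an external test set, reporting the sensitivity, specificity, and area under the curve (AUC) that a clinician might reasonably ask to see. It is not the BONSAI pipeline; the data, features, and decision threshold are hypothetical placeholders.

    # Minimal sketch of external validation of a binary classifier; the data,
    # features, and 0.5 decision threshold are hypothetical placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(0)
    # Placeholder arrays standing in for image-derived features and labels.
    X_train, y_train = rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)
    X_ext, y_ext = rng.normal(size=(300, 20)), rng.integers(0, 2, 300)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # External validation: the model never sees these cases during training.
    scores = model.predict_proba(X_ext)[:, 1]
    preds = (scores >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_ext, preds).ravel()
    print(f"Sensitivity: {tp / (tp + fn):.2f}")
    print(f"Specificity: {tn / (tn + fp):.2f}")
    print(f"AUC:         {roc_auc_score(y_ext, scores):.2f}")

Reporting such metrics on data the model has never seen, rather than on the training set alone, is the kind of evidence that prospective clinical evaluation seeks to provide.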
Patients believe that their needs are unique and cannot be adequately addressed by algorithms. IBM's Watson diagnoses heart disease better than cardiologists. Chatbots dispense medical advice for the United Kingdom's National Health Service in lieu of nurses. Smartphone apps detect skin cancer with the accuracy of experts. Algorithms identify eye diseases as accurately as ophthalmologists. It is believed that medical AI will pervade 90% of hospitals and replace 80% of what doctors currently do. However, the healthcare system will need to convince patients that the clinician still makes the ultimate decision (11). Trust is the key word for both clinicians and patients. Educating medical professionals on AI systems has been suggested. Until AI is accepted as the 'standard of care', educated informed consent needs to be obtained from the patient and caregiver before the clinician uses AI (12). With its ability to integrate and learn from large sets of clinical data, AI can help in diagnosis, clinical decision making, and personalized medicine (13). Standards need to be created and met, and limitations on the use of AI should also be emphasized. AI should be used to reduce, not increase, health inequality – geographic, economic, and social.
Can a doctor overrule a machine's diagnosis or decision, and vice versa? Who is responsible for preventing malicious attacks on algorithms? AI systems are becoming more autonomous, resulting in a greater degree of direct-to-patient advice that bypasses human intervention. The clinician's role in maintaining quality, safety, patient education, and holistic support therefore becomes even more necessary. Utilization of AI would have a psychological impact on both patients and doctors, changing the doctor–patient relationship. The doctor now needs to learn to interact with 'expert' patients, who have direct access to AI tools. Will clinicians bear the psychological stress and consequences if an AI decision results in harm to the patient? Could AI 'replacing' a doctor's advice diminish the value of clinicians, reducing trust? If AI and the doctor disagree, who will be perceived as 'right'? The degree of relative trust placed in technology and in healthcare professionals may differ between individuals, between generations, and at different times. There are models of 'peaceful co-existence': autopilots on planes, for example, have improved airline safety without compromising the training of pilots. The same could apply to healthcare (14). Many deep learning algorithms used for image analysis are difficult to understand and explain to a patient. The greatest challenge to AI in the healthcare domain is ensuring its adoption in daily clinical practice (15).
Developing regulatory requirements for the use of AI in healthcare that keep pace with an ever-changing, future-ready, evidence-based environment, and that promote compliance and adherence, is a challenge (16). Regulators were once considered a hurdle for AI and associated technologies. It was soon realized that stand-alone algorithms could act as a 'medical device'. The U.S. Food and Drug Administration (FDA) has asserted its ability and intent to regulate AI in the healthcare system. The FDA launched a digital health division in 2019 with new regulatory standards for AI-based technologies. Of the 64 AI-based, FDA-approved medical technologies, only 29 had AI-related terms or expressions mentioned in the FDA announcements. The International Medical Device Regulators Forum (IMDRF) defines 'software as a medical device' (SaMD) as software intended to be used for one or more medical purposes without being part of a hardware medical device.
The U.S. FDA has made significant strides in developing policies tailored for SaMD, with the aim of ensuring that safe and effective technology reaches patients and healthcare professionals. Manufacturers need to submit a marketing application to the FDA prior to initial distribution of their medical device. AI/ML-based software, when intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions, is a medical device under the Food, Drug, and Cosmetic (FD&C) Act and is termed SaMD by the FDA and the IMDRF (17). The FDA is regulating black-box medical algorithms. The benefits of black-box medicine – quick, cheap shortcuts to otherwise inaccessible medical knowledge – would be seriously delayed or even curtailed if slow, ponderous, expensive clinical trials are required. Traditional methods of testing new medical technologies and devices may not always work and may even slow or stifle innovation (18).
The regulatory framework in most countries does not keep abreast of developments in AI. Intellectual property laws in India at present do not recognize the patentability of algorithms – the basis on which an AI solution functions. The Patents Act expressly exempts algorithms from being 'inventions' eligible for patent protection. This may be a disincentive to the development of AI solutions (19). With appropriately tailored regulatory oversight, AI/ML-based SaMD will deliver safe and effective software functionality, improving quality of care. Regulatory issues and adoption by the healthcare provider and the beneficiary could be perceived as barriers. AI needs to undergo extensive clinical validation before it is fully integrated into the core of the healthcare delivery system (20).
The law will need to catch up and keep pace with new innovations deploying AI if the potential of AI is to be fully exploited. This is particularly relevant in the healthcare arena. Several legal issues arise because no specific laws have been enacted to deal with AI. Existing regulations do not distinguish between cases where there is an error in diagnosis, a malfunction of the technology, or the original use of inaccurate or inappropriate data in the training database. Neither the software developer nor the specific program design engineer is liable. It is also not clear how one determines the degree of accountability of a medical professional when a wrong diagnosis or treatment is due to a glitch in the system or an error in data entry. The lack of adequate data privacy laws in many countries could result in such data sets being commercially exploited for other purposes. Will the clinician also be implicated?
There is ongoing debate about who will be held liable when robots and AI, acting autonomously, harm patients. The current consensus is that the professional is open to liability if he or she used the tool outside the scope of its regulatory approval, misused it, applied it despite significant professional doubts about the validity of the evidence surrounding it, or used it with knowledge that the toolmaker had obfuscated negative facts. In other cases, the liability falls back on the creators and the companies behind them. However, the interpretation of 'the law' could differ depending on many variables. This is a grey area unlikely to be resolved soon.
The standard of care in clinical settings may, in the future, include ML diagnostics, particularly where AI-enabled tools demonstrate higher precision than an experienced super-specialist. From a legal perspective, the decision to rely on AI will itself be a human medical judgement, like any other judgement. Once ML becomes the standard of care, ML will raise the bar (21). A higher level of accuracy could be the new standard, but the malpractice exposure of ML users will actually be reduced, because by relying on ML they will be complying with the new 'higher' professional standard (22). 'Automated decision-making', for example, means a decision that is made solely by automated means, without any human involvement (23). In the real world, terms do not always have such an unambiguous, explicit meaning.
AI involves analysis of voluminous data to discern patterns, which are then used to predict the likelihood of future occurrences. These data sets about individuals' health can come from electronic health records, health insurance claims, purchasing records, income data, criminal records, and even social media (24). Medical malpractice and product liability issues could arise with the use of 'black-box' algorithms, as users may not be able to provide a logical explanation of how the algorithm was arrived at initially. Appropriate legislation is necessary to allow the apportionment of damages consequent to the actions of an AI-enabled system. AI systems need to develop 'moral' and 'ethical' behaviour patterns aligned with human interests. Adapting existing principles and precedents to imminent new problems, such as whether a robot can be sued for malpractice, will not solve the problem; standards need to be defined for robots as well. Vicarious responsibility could extend to the human surgeon overseeing the robot, the company manufacturing the robot, and the specific engineer who designed it. The culpability of each of these protagonists also needs to be taken into account. Product liability is ascribed to defective equipment and medical devices, which healthcare providers may use. Watson, the AI-enabled supercomputer, is considered equivalent to a consulting physician and is not categorised as a product (25).
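One partial, widely used response to the 'black-box' concern is model-agnostic, post-hoc explanation. The sketch below is a generic illustration only (not any approved SaMD, and the clinical variables are hypothetical): it uses permutation importance to rank which inputs most influence an opaque classifier's predictions.

    # Generic sketch of post-hoc explanation of a 'black-box' model; the
    # feature names and data are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # assumed example inputs
    X = rng.normal(size=(500, len(feature_names)))
    y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure how much accuracy drops:
    # the larger the drop, the more the predictions depend on that feature.
    result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
    for name, importance in sorted(zip(feature_names, result.importances_mean),
                                   key=lambda pair: -pair[1]):
        print(f"{name:12s} {importance:.3f}")

Such attributions do not reveal how an algorithm was originally derived, but they can give clinicians, patients, and courts at least a documented account of what drives an individual recommendation.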
It has been pointed out that ethical, legal, and cultural factors need to be considered by developers, practitioners, and policy makers when designing, using, and regulating e-health platforms (26). The Right to Privacy has been declared a fundamental right by the Supreme Court of India. The Srikrishna Committee, constituted to make recommendations on data privacy and its management, drafted the Personal Data Protection Bill, 2018 – the first step in India's data privacy journey. The Ministry of Health and Family Welfare, Government of India, is in the midst of enacting sector-specific legislation called DISHA, the Digital Information Security in Healthcare Act. All of these are relevant to the growth and development of AI in healthcare in India.
Physicians may need to give reasons to their patients if they plan to override the AI recommendation. This carries unique legal and ethical challenges, more so if the physician is unaware of the algorithms – the basis of the AI recommendation. If complications ensue, the particular process of clinical decision-making itself may be perceived differently by patients, peers, and the legal system (27).
One wonders how Sir William Osler, who in 1890 opined that medicine is a science of uncertainty and an art of probability, would have reacted to the introduction of AI in healthcare (28). For centuries, practicing medicine involved acquiring as much data as possible about the patient's health or disease and taking decisions. Wisdom presupposed experience, judgement, and problem-solving skills using rudimentary tools and limited resources. AI will never, and should never, replace a commiserating clinician. Hopefully, the AI-enabled clinician will now spend more time empathising with the patient rather than drowning in voluminous data. No longer burdened with extracting meaningful data, the clinician will instead spend that time productively managing the data extracted by AI.
The author thanks Ms. Lakshmi for providing secretarial assistance.