Imaging Pearls ❯ Deep Learning ❯ AI and Legal Issues
- “Most private and community radiology practices have a good working relationship with their hospitals but are financially independent. This dichotomy makes a hybrid model between the health system and the radiologists most likely to be effective. Well-defined governance structures for AI development, purchase, and implementation in private and community practice are less prevalent than in academic practices. However, as adoption of AI in the community becomes more widespread, structured AI oversight within these radiology practices will be equally important. Results of the American College of Radiology 2019 radiologist workforce survey demonstrated less than 17% of radiology group practices are part of academic university practices, with the majority of the remaining practices falling into the categories of private practice (47%), multispecialty clinic (12%), and hospital-based practice and corporate practice (4%).”
Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
Dania Daye et al.
Radiology 2022; 000:1–9
- “Public databases are an important resource for machine learning research, but their growing availability sometimes leads to “off-label” usage, where data published for one task are used for another. This work reveals that such off-label usage could lead to biased, overly optimistic results of machine-learning algorithms. The underlying cause is that public data are processed with hidden processing pipelines that alter the data features. Here we study three well-known algorithms developed for image reconstruction from magnetic resonance imaging measurements and show they could produce biased results with up to 48% artificial improvement when applied to public databases. We relate to the publication of such results as implicit “data crimes” to raise community awareness of this growing big data problem.”
Implicit data crimes: Machine learning bias arising from misuse of public data
Efrat Shimrona et al.
PNAS 2022 Vol. 119 No. 13 e2117203119
- “In summary, this research aims to raise a red flag regarding naive off-label usage of open-access data in the development of machine-learning algorithms. We showed that such usage may lead to biased results of inverse problem solvers. Furthermore, we demonstrated that training MRI reconstruction algorithms using such data could yield an overly optimistic evaluation of their ability to reconstruct small, clinically relevant details and pathology. This increases the risk of translation of biased algorithms into clinical practice. Therefore, we call for attention of researchers and reviewers: Data usage and pipeline adequacy should be considered carefully, reproducible research should be encouraged, and research transparency should be required. Through this work, we hope to raise community awareness, stimulate discussions, and set the ground for future studies of data usage.”
Implicit data crimes: Machine learning bias arising from misuse of public data
Efrat Shimrona et al.
PNAS 2022 Vol. 119 No. 13 e2117203119
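The bias mechanism Shimron et al. describe can be reproduced in a toy setting: when a hidden "publication pipeline" has already smoothed away fine detail, a reconstruction algorithm is scored against an easier target and its error shrinks artificially. The signal, pipeline, and reconstruction below are illustrative assumptions for a one-dimensional sketch, not the authors' actual MRI experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# "Raw" signal: a smooth component plus fine high-frequency detail.
t = np.arange(n)
raw = np.sin(2 * np.pi * 3 * t / n) + 0.3 * rng.standard_normal(n)

def hidden_pipeline(x, w=5):
    """Stand-in for undisclosed preprocessing (smoothing)
    applied before the data were published."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def lowpass_recon(x, keep=16):
    """Toy 'undersampled reconstruction': keep only the lowest
    frequencies, roughly what zero-filled undersampled MRI does."""
    f = np.fft.rfft(x)
    f[keep:] = 0
    return np.fft.irfft(f, n=len(x))

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

processed = hidden_pipeline(raw)

err_raw = rmse(lowpass_recon(raw), raw)                    # honest evaluation
err_processed = rmse(lowpass_recon(processed), processed)  # off-label "data crime"

print(f"RMSE vs raw data:       {err_raw:.3f}")
print(f"RMSE vs processed data: {err_processed:.3f}")  # artificially lower
```

Because the hidden pipeline removes exactly the detail the low-pass reconstruction cannot recover, the second score is systematically better. This is the same inflation mechanism the paper quantifies at up to 48% on real public MRI databases.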
- AI and Liability
- Who is responsible for the accuracy of an AI system when it makes an error?
- What is the liability of the Radiologist when using AI?
- What is the liability of the health system that purchases an AI product?
- “Developers of health care AI products face the risk of product liability lawsuits when their products injure patients, whether injuries arise from defective manufacturing, defective design, or failure to warn users about mitigable dangers. Physicians may also face risks from patient injuries stemming from the use of AI, including faulty recommendations or inadequate monitoring. Similarly, hospitals or health systems may face liability as coordinating providers of health care or on the basis of inadequate care in supplying AI tools — an analogy to familiar forms of medical liability for providing inadequate facilities or negligently credentialing a physician practicing at the hospital. Such risks may reduce incentives to adopt AI tools.”
AI Insurance: How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care
Ariel Dora Stern et al.
NEJM Catalyst Vol. 3 No. 4 | April 2022 DOI: 10.1056/CAT.21.0242
- "AI liability insurance would reduce the liability risk to developers, physicians, and hospitals. Insurance is a tool for managing risk, allowing the insurance policy holders to benefit from pooling risk with others. Insurance providers are intermediaries that play an organizing role in creating these pools and performing actuarial assessment of associated risks. While many types of insurance exist in the health care context, our focus in this article is entirely on AI liability insurance rather than coverage for health care services.”
AI Insurance: How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care
Ariel Dora Stern et al.
NEJM Catalyst Vol. 3 No. 4 | April 2022 DOI: 10.1056/CAT.21.0242
- "The credentialing function of insurance will thus reinforce the patient-centered incentives of AI developers. Consequently, this insurance may alleviate health care provider concerns, at least to the point at which they are willing to adopt the AI technology. Indeed, this should be the case regardless of whether the AI manufacturer or the health care provider is the holder of the insurance policy, as long as such a policy can be purchased. However, the price and implicit value of insurance are likely to be passed through. For example, a manufacturer selling an AI tool that comes with liability insurance will be able to command a higher price than for the same tool without such insurance. Insurers may also require ongoing performance data from AI developers, whether they are in house or commercial; such data could be well beyond those needed to meet the requirements of regulatory premarket review. While insurers do not provide the same level of centralized review that regulators do, they may well serve a more context-sensitive, hands-on evaluative role focused on both quantifying and reducing risk — a role that may be especially important given the questionable generalizability of many current-generation AI systems.”
AI Insurance: How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care
Ariel Dora Stern et al.
NEJM Catalyst Vol. 3 No. 4 | April 2022 DOI: 10.1056/CAT.21.0242
- “Proponents of artificial intelligence (“AI”) technology have suggested that in the near future, AI software may replace human radiologists. While AI’s assimilation into the specialty has occurred more slowly than predicted, developments in machine learning, deep learning, and neural networks suggest that technological hurdles and costs will eventually be overcome. However, beyond these technological hurdles, formidable legal hurdles threaten AI’s impact on the specialty. Legal liability for errors committed by AI will influence AI’s ultimate role within radiology and whether AI remains a simple decision support tool or develops into an autonomous member of the healthcare team.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
https://doi.org/10.2214/AJR.21.27224
- “Additional areas of uncertainty include the potential application of products liability law to AI, and the approach taken by the U.S. FDA in potentially classifying autonomous AI as a medical device. The current ambiguity of the legal treatment of AI will profoundly impact autonomous AI development given that vendors, radiologists, and hospitals will be unable to reliably assess their liability from implementing such tools. Advocates of AI in radiology and health care in general should lobby for legislative action to better clarify the liability risks of AI in a way that does not deter technological development.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- "Duplicating radiologists’ abilities through technology has proven more of a challenge than originally posited, with resultant skepticism regarding AI’s ultimate impact on the field, at least for the near term. Technological hurdles and costs will fall, and it is only a matter of time until machines can offer a reasonable facsimile of the radiologist report. However, even beyond these technological hurdles, formidable legal obstacles, often not given enough attention in the literature, threaten AI’s impact on the specialty and, if unchanged, have the potential to preclude the future success of this emerging industry.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- “Fundamentally, the legal handling of AI will hinge on the degree of autonomy exercised by the AI software. If the primary use of AI is simply as a decision support tool to highlight findings for the radiologist, who thereafter makes the final determinations and issues a report, the issues are quite simple. The radiologist who makes the final determination bears the liability risk.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- "A radiologist breaches this duty when the expected standard of care is not met. The standard of care is the degree of care that a “reasonably prudent radiologist” would be expected to exercise under the same or similar circumstances. The issue of liability is one of reasonableness: what would a reasonably prudent radiologist do in this situation? This standard of care will largely be established in the context of the courtroom using expert witness testimony, whereby other radiologists opine as to what, in their professional opinion, would be a reasonable action in this situation.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- "But how does medical malpractice even work in the setting of an autonomous algorithm? Is there a similar physician-patient relationship when the “physician” is an algorithm? How is an AI algorithm held to the “reasonably prudent radiologist” (or perhaps “reasonably prudent algorithm”) standard, and who could serve as expert witness to determine this standard? Is there a different standard of care or expectation for an algorithm, and does the expectation change if the algorithm is performing tasks that go beyond the capabilities of the typical human radiologist (e.g., predicting optimum therapy options or responses based on imaging or genomic lesion characterization)? Ultimately, the facility hosting the AI likely would bear liability, and malpractice principles would no longer be applicable or even defensible; the circumstance would essentially become a form of “enterprise” liability.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- "An injured patient tends to be a sympathetic witness in the eyes of a jury, whereas an AI algorithm would be unsympathetic; faceless emotionless robots make for very bad defendants. A skilled plaintiff’s attorney would elicit a mental image of machines running amok, including cold passionless robots making life and death judgments; jurors, inclined to fear technology from a lifetime of science fiction dystopias, would likely “throw the book” at the defendant. The idea that a medical center would replace a caring and compassionate doctor with a robot such as HAL 9000 to maximize revenue would not play well to a jury.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- “Debate is ongoing regarding the appropriate integration of AI tools with human decision-makers (including non-radiologists), the risks of ignoring AI outputs as AI use becomes the standard of care, and potential issues in overreliance on AI tools that may be relevant to liability. AI law remains in its early stages, and ongoing uncertainty is present regarding the manner in which courts will allocate liability for AI mistakes in radiology and the impact that such costs may have on AI development. Proponents of AI should recognize the legal system’s complexities and hurdles.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- "AI is undergoing rapid integration into radiology practice, driven by the appeal of improvements in diagnostic accuracy, cost effectiveness, and savings. While the legal implications of simple applications of AI as a radiology tool are overall straightforward, the legal ramifications of greater AI autonomy are thus far incompletely delineated. Current technological hurdles to the integration of advanced AI solutions into radiology practice will gradually be overcome. However, the accompanying legal hurdles and complexities are substantial and, depending on how they are handled, could lead to untapped technological potential.”
Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Jonathan L. Mezrich, MD, JD, LLM, MBA
AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
- "The FDA has recently approved software by AIDoc Medical (Tel Aviv, Israel) as well as Zebra Medical Vision (Shefayim, Israel) that automatically detects pulmonary embolisms in chest CTs. As described by Weikert et al., the work by AIDoc was based on a compiled dataset of 1499 CT pulmonary angiograms with corresponding reports that was then tested on four trained prototype algorithms. The algorithm that achieved optimal results was shown to have a sensitivity of 93% and a specificity of 95%, with a positive predictive value of 77%.”
The first use of artificial intelligence (AI) in the ER: triage not diagnosis
Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
Emergency Radiology (2020) 27:361–366
- "Cerebral hemorrhage is a key category of emergent diagnoses in which AI is making inroads. The FDA has approved AI software applications by AIDoc, Zebra Medical, and MaxQ designed to detect intracranial bleeds. The initial goal of the software is to improve workflow for radiologists (and our patients), and facilitate the triage process to improve the chances that cases with bleeds are read earlier in radiologic review. Supporting the phrase “time is brain,” this is an ideal use of AI and deep learning.”
The first use of artificial intelligence (AI) in the ER: triage not diagnosis
Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
Emergency Radiology (2020) 27:361–366
- "Acknowledging the effects of high imaging volumes on wait times for radiograph reviews, Taylor et al. conducted a large retrospective study to annotate a substantial dataset of pneumothorax-containing chest X-rays (ultimately, 13,292 frontal chest X-rays, 3107 of which included pneumothorax) to use to train deep CNNs to evaluate for possible emergent pneumothorax upon acquisition of the image. The investigators succeeded in developing models that can yield high-specificity screening of moderate or large pneumothoraces in cases where human review may be affected by scheduling, but the algorithm notably fails to detect small and some larger pneumothoraces.”
The first use of artificial intelligence (AI) in the ER: triage not diagnosis
Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
Emergency Radiology (2020) 27:361–366
- "The ability to triage patients and take care of acute processes such as intracranial bleed, pneumothorax, and pulmonary embolism will largely benefit the health system, improving patient care and reducing costs. In the end, our mission is the care of our patients, and if AI can improve it, we will need to adopt it with open arms.”
The first use of artificial intelligence (AI) in the ER: triage not diagnosis
Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
Emergency Radiology (2020) 27:361–366
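The AIDoc pulmonary embolism figures quoted above (sensitivity 93%, specificity 95%, PPV 77%) illustrate a point worth keeping in mind when reading triage-tool studies: PPV is not a fixed property of an algorithm but depends on disease prevalence in the tested population. A back-of-the-envelope check via Bayes' rule (the 15% and 5% prevalence values below are illustrative assumptions, not figures from the paper):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value, PPV = TP / (TP + FP), for a
    population with the given disease prevalence."""
    tp = sensitivity * prevalence          # true-positive rate per patient
    fp = (1 - specificity) * (1 - prevalence)  # false-positive rate per patient
    return tp / (tp + fp)

# At roughly 15% PE prevalence among scanned patients, the reported
# sensitivity, specificity, and PPV are mutually consistent:
print(round(ppv(0.93, 0.95, 0.15), 2))  # 0.77

# Deployed in a lower-prevalence population, the same algorithm's
# PPV drops sharply:
print(round(ppv(0.93, 0.95, 0.05), 2))  # 0.49
```

This prevalence dependence is one reason a triage tool validated in one emergency department may generate far more false alarms per true finding in another.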
- “Rationale and Objectives: Generative adversarial networks (GANs) are deep learning models aimed at generating fake realistic looking images. These novel models made a great impact on the computer vision field. Our study aims to review the literature on GANs applications in radiology.
Conclusion: GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used to improve clinical care, education and research.”
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
Vera Sorin et al.
Acad Radiol 2020 (in press)
- “Generative adversarial networks (GANs) are a more recent deep learning development, invented by Ian Goodfellow and colleagues. GAN is a type of deep learning model that is aimed at generating new images. GANs are now at the center of public attention due to “deepfake” digital media manipulations. This technique uses GANs to generate artificial images of humans. As an example, this webpage uses GAN to create random fake pictures of non-existent people.”
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
Vera Sorin et al.
Acad Radiol 2020 (in press)
- "Deep learning can improve diagnostic imaging tasks in radiology enabling segmentation of images, improvement of image quality, classification of images, detection of findings, and prioritization of examinations according to urgent diagnoses. Successful training of deep learning algorithms requires large-scale data sets. However, the difficulty of obtaining sufficient data limits the development and implementation of deep learning algorithms in radiology. GANs can help to overcome this obstacle. As demonstrated in this review, several studies have successfully trained deep learning algorithms using augmented data generated by GANs. Data augmentation with generated images significantly improved the performance of CNN algorithms. Furthermore, using GANs can reduce the amount of clinical data needed for training. The increasing research focus on GANs can therefore impact successful automatic image analysis in radiology.”
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
Vera Sorin et al.
Acad Radiol 2020 (in press)
- "Some risks are involved with the development of GANs. In a recent publication Mirsky et al. warn against hacking of imaging examinations, artificially adding or removing medical conditions from patient scans. Also, using generated images in clinical practice should be done with caution, as the algorithms are not without limitations. For example, in image reconstruction details can get lost at translation, while fake inexistent details can suddenly appear.”
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
Vera Sorin et al.
Acad Radiol 2020 (in press)
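The adversarial setup the review describes can be stated compactly. In Goodfellow et al.'s original formulation (standard notation, not quoted from the review itself), a generator G and a discriminator D play a minimax game:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

D is trained to distinguish real images x from generated images G(z), while G is trained to fool D. At the equilibrium of this game the generator's output distribution matches the real data distribution, which is why GAN-generated images can plausibly serve as the augmentation data for CNN training described above.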
- “The medico-legal issue that then arises is the question of “who is responsible for the diagnosis,” especially if it is wrong. Whether data scientists or manufacturers involved in development, marketing, and installation of AI systems will carry the ultimate legal responsibility for adverse outcomes arising from AI algorithm use is a difficult legal question; if doctors are no longer the primary agents of interpretation of radiological studies, will they still be held accountable?”
What the radiologist should know about artificial intelligence – an ESR white paper
Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
- “If radiologists monitor AI system outputs and still have a role in validating AI interpretations, do they still carry the ultimate responsibility, even though they do not understand, and cannot interrogate the precise means by which a diagnosis was determined? This “black box” element of AI poses many challenges, not least to the basic human need to understand how and why important decisions were made.”
What the radiologist should know about artificial intelligence – an ESR white paper
Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
- “Furthermore, if patient data are used to build AI products which go on to generate profit, consideration needs to be given to the issue of intellectual property rights. Do the involved patients and the collecting organizations have a right to share in the profits that derive from their data?”
What the radiologist should know about artificial intelligence – an ESR white paper
Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
- “Fundamentally, each patient whose data is used by a third party should provide consent for that use, and that consent may need to be obtained afresh if the data is re-used in a different context (e.g., to train an updated software version). Moreover, ownership of imaging datasets varies from one jurisdiction to another. In many countries, the ultimate ownership of such personal data resides with the patient, although the data may be stored, with consent, in a hospital or imaging centre repository.”
What the radiologist should know about artificial intelligence – an ESR white paper
Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
- “The real challenge is not to oppose the incorporation of AI into the professional lives (a futile effort) but to embrace the inevitable change of radiological practice, incorporating AI in the radiological workflow. The most likely danger is that ‘[w]e’ll do what computers tell us to do, because we’re awestruck by them and trust them to make important decisions.’”
What the radiologist should know about artificial intelligence – an ESR white paper
Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2