
Patient Involvement and Empowerment in AI: Clinical Dimensions

Elliot K. Fishman MD FACR
Elliot K. Fishman MD Professorship in Radiology
Professor of Radiology, Surgery, Oncology and Urology
Johns Hopkins Hospital, Baltimore, MD, USA

 

AI and Reality Testing

  • AI will improve the care of our patients with earlier, more accurate diagnosis
  • AI will decrease physician workload so they can spend more time with their patients
  • AI will change Medicine as we know it

 

AI in Clinical Trials

  • Decision support in trial design
  • Patient identification, recruitment and retention
  • Outcome monitoring
  • Side effect monitoring
  • Decrease patient dropout from trials

 

“Artificial Intelligence (AI) is set to transform medical imaging by leveraging the vast data contained in medical images. Deep learning and radiomics are the two main AI methods currently being applied within radiology. Deep learning uses a layered set of self-correcting algorithms to develop a mathematical model that best fits the data. Radiomics converts imaging data into mineable features such as signal intensity, shape, texture, and higher-order features. Both methods have the potential to improve disease detection, characterization, and prognostication.”
A primer on artificial intelligence in pancreatic imaging.
Ahmed TM, Kawamoto S, Hruban RH, Fishman EK, Soyer P, Chu LC.
Diagn Interv Imaging. 2023 Mar 24:S2211-5684(23)00050-5.
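
To make radiomics’ “mineable features” concrete, a minimal sketch in Python follows. The feature set, histogram bin count, and synthetic patch are illustrative assumptions, not taken from the paper; production tools such as PyRadiomics compute hundreds of standardized features (intensity, shape, texture, higher-order) from segmented 3D volumes.

```python
# Minimal sketch of first-order radiomic feature extraction from a
# segmented region of interest (ROI). Illustrative only.
import numpy as np

def first_order_features(roi: np.ndarray) -> dict:
    """Compute simple intensity, texture, and size descriptors for an ROI."""
    flat = roi.astype(float).ravel()
    counts, _ = np.histogram(flat, bins=32)
    p = counts[counts > 0] / counts.sum()           # histogram bin probabilities
    return {
        "mean_intensity": flat.mean(),              # average signal intensity
        "std_intensity": flat.std(),                # intensity heterogeneity
        "entropy": float(-(p * np.log2(p)).sum()),  # texture proxy
        "size_voxels": flat.size,                   # crude size descriptor
    }

# Toy example: a synthetic 16 x 16 "lesion" patch with Gaussian intensities
rng = np.random.default_rng(0)
roi = rng.normal(loc=100.0, scale=15.0, size=(16, 16))
print(first_order_features(roi))
```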

 


Patient Involvement in AI

 

 Purpose: To evaluate the sensitivity of artificial intelligence (AI)-powered software in detecting liver metastases, especially those overlooked by radiologists.   
Results: The software successfully processed images from 135 patients. The per-lesion sensitivity for all liver lesion types, liver metastases, and liver metastases overlooked by radiologists was 70.1%, 70.8%, and 55.0%, respectively. The software detected liver metastases in 92.7% and 53.7% of patients in detected and overlooked cases, respectively. The average number of false positives was 0.48 per patient.
Conclusion: The AI-powered software detected more than half of liver metastases overlooked by radiologists while maintaining a relatively low number of false positives. Our results suggest the potential of AI-powered software in reducing the frequency of overlooked liver metastases when used in conjunction with the radiologists’ clinical interpretation.   
Artificial intelligence-powered software detected more than half of the liver metastases overlooked by radiologists on contrast-enhanced CT
Hirotsugu Nakai et al.
European Journal of Radiology 163 (2023) 110823
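
For readers who want the reported metrics made concrete, a minimal sketch of the standard formulas follows. The lesion counts in the example are hypothetical, chosen only to reproduce the published 70.1% and 0.48 figures; the study’s actual counts are not reproduced here.

```python
# Standard definitions of per-lesion sensitivity and per-patient false
# positives. The example counts are hypothetical.

def per_lesion_sensitivity(tp: int, fn: int) -> float:
    """Fraction of true lesions the software detected."""
    return tp / (tp + fn)

def false_positives_per_patient(total_fp: int, n_patients: int) -> float:
    """Average number of spurious detections per patient."""
    return total_fp / n_patients

# e.g., 143 detected of 204 lesions; 65 false positives across 135 patients
print(f"Per-lesion sensitivity: {per_lesion_sensitivity(143, 61):.1%}")        # 70.1%
print(f"False positives/patient: {false_positives_per_patient(65, 135):.2f}")  # 0.48
```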

 


 

“AI has significant potential to improve patient safety. However, given the lack of rigorous evaluation of AI in actual current practice, coupled with its surprisingly broad use, we believe the time has come to create a national agenda for a critical evaluation of AI and patient safety. This critical evaluation needs to determine, among other things, whether the broad current adoption of AI in health systems has actually improved patient safety.”
Bending the patient safety curve: how much can AI help?
David C. Classen, Christopher Longhurst and Eric J. Thomas
npj Digital Medicine (2023) 6:2; https://doi.org/10.1038/s41746-022-00731-5

 

“There are few rigorous assessments of actual AI deployments in health care delivery systems, and while there is some limited evidence for improved safety processes or outcomes when these AI tools are deployed, there is also evidence that these systems can increase risk if the algorithms are tuned to give overly confident results. For example, within AI risk prediction models, the sizeable literature on model development and validation is in stark contrast to the scant data describing successful clinical deployment and impact of those models in health care settings. One study revealed significant problems with one vendor’s EHR sepsis prediction algorithm, which has been very widely deployed among many health systems without any rigorous evaluation.”
Bending the patient safety curve: how much can AI help?
David C. Classen, Christopher Longhurst and Eric J. Thomas
npj Digital Medicine (2023) 6:2; https://doi.org/10.1038/s41746-022-00731-5

 

“One of the health systems that uses this commercial EHR sepsis prediction program performed an evaluation of this program in its own health system. The results were unexpected: the EHR vendor predictive program only picked up 7% of 2552 patients with sepsis who were not treated with antibiotics in a timely fashion and failed to identify 1709 patients with sepsis that the hospital did identify. Obviously, this AI sepsis prediction algorithm was not subjected to rigorous external evaluation but nevertheless was broadly adopted because the EHR vendor implemented it in its EHR package and thus made it conveniently available for its large install base of hospitals. No published evaluation of the impact of this proprietary EHR AI program on patients beyond this hospital has emerged, and the impacts, both positive and negative, that it may have caused in its broad hospital use are unknown.”
Bending the patient safety curve: how much can AI help?
David C. Classen, Christopher Longhurst and Eric J. Thomas
npj Digital Medicine (2023) 6:2; https://doi.org/10.1038/s41746-022-00731-5
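
The arithmetic behind the quoted evaluation can be made explicit with a short sketch; the input numbers come directly from the quote, and only a simple percentage is derived.

```python
# Worked arithmetic for the sepsis-alert figures quoted above.
untreated_sepsis = 2552        # sepsis patients not treated with antibiotics in time
flagged_fraction = 0.07        # share of those the vendor algorithm picked up
missed_but_identified = 1709   # sepsis patients the hospital found but the alert missed

flagged = round(untreated_sepsis * flagged_fraction)  # ~179 patients
print(f"Alert flagged ~{flagged} of {untreated_sepsis} untreated sepsis patients")
print(f"Alert missed {missed_but_identified} patients the hospital itself identified")
```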

 

“Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care.”
The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
Rosanna Macri and Shannon L. Roberts
Curr. Oncol. 2023, 30, 2178–2186

 

“Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to consider when adopting the potential use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue of patient values; reviews all clinically appropriate options; and makes a shared decision of what option best meets the patient’s values. The guide can be used for diverse clinical applications of AI.”
The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
Rosanna Macri and Shannon L. Roberts
Curr. Oncol. 2023, 30, 2178–2186

 

“Shared decision making is an important part of patient-centered care that contributes to a positive therapeutic relationship by respecting patient autonomy and dignity through empowering patients to actively engage in treatment decisions. The goal is for a clinician to partner with a patient to identify the best option based on the patient’s values. During a shared decision-making conversation, the clinician provides the patient with information to build an accurate illness understanding. The patient is then asked to consider what is most important to them in relation to their health and share their values, beliefs, and overall life goals, why they are important, and how they apply to quality of life. Taking this into consideration, the clinician then offers the patient different options and informs them about the risks and benefits based on the best available evidence.”
The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
Rosanna Macri and Shannon L. Roberts
Curr. Oncol. 2023, 30, 2178–2186

 

The guide that we are suggesting will use a similar format and ask clinicians, as well as, potentially, AI developers even further upstream, to consider certain questions to ensure that predominant patient values associated with the use of AI in clinical care are respected prior to and throughout the shared decision-making process. This will help clinicians to carry out the following:
1. Ensure that they have considered the information that the patient may identify as important or relevant to them in the use of a particular technology in their clinical care.
2. Have an opportunity to explore patient-specific values associated with the implementation of AI in their care.
3. Work with the patient to apply their values to their clinical decision making.
The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
Rosanna Macri and Shannon L. Roberts
Curr. Oncol. 2023, 30, 2178–2186

 

“Artificial intelligence (AI) is an area of enormous interest that is transforming health care and biomedical research. AI systems have shown the potential to support patients, clinicians, and health-care infrastructure. AI systems could provide rapid and accurate image interpretation, disease diagnosis and prognosis, improved workflow, and reduced medical errors, and lead to more efficient and accessible care. Incorporation of patient-reported outcome measures (PROMs) could advance AI systems by helping to incorporate the patient voice alongside clinical data.”
Embedding patient-reported outcomes at the heart of artificial intelligence health-care technologies
Samantha Cruz Rivera et al. 
Lancet Digit Health 2023; 5: e168–73 

 

“The use of AI in healthcare involves not only technical issues but also ethical, psychocognitive, and social-demographic considerations of presenting patients with cancer with the presence of AI at the time of the diagnosis. Trust, Accountability, Personal interaction, Efficiency, and General attitude toward AI were identified as five core areas by Ongena et al. The variables that merge such aspects of patients’ attitudes to using and communicating diagnosis with AI are education and knowledge. Accordingly, the authors showed that participants who have lower education are less supportive of AI, and those who have thought AI to be less efficient have a more negative attitude toward AI. Therefore, it is possible to consider that those who do not have a good understanding of the way AI works tend to have a negative attitude toward its effectiveness and less trust in its potential.”
The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?
Derevianko A et al.
Cancers (Basel). 2023 Jan 12;15(2):470.

 

“Communication can be seen as a pivotal ingredient in medical care, and XAI might provide a patient-friendly explanation of biomedical decisions based on ML. Particularly, XAI would be highly valuable in the oncology field, where it is essential to consider not only the purely medical aspects but also the patient’s psychological and emotional dimensions. Technological aspects of AI systems are largely described by the current literature in different health sectors. However, the patient’s standpoint on AI making decisions on their health is often neglected. Scarce communication between patients and clinicians about the potential benefits of AI is likely to cause patients’ mistrust of such a promising tool.”
The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?
Derevianko A et al.
Cancers (Basel). 2023 Jan 12;15(2):470.

 

“In conclusion, doctors should sharpen their communication skills when AI is involved in diagnosis, and patients should be engaged in the process mainly by being informed on the functioning of medical tools used to formulate their diagnosis. One of the most evident elements from the retrieved studies is that patients do not know what AI is, and this lack of knowledge affects trust and doctor–patient communication. Since patients should be empowered and given tailored information at all phases of their clinical journey, they should ideally know which diagnostic tools are used by their clinicians and the way they work. Given AI’s outstanding potential, we believe that informing patients about its progress in our field will help them to be more trusting towards it.”
The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?
Derevianko A et al.
Cancers (Basel). 2023 Jan 12;15(2):470.

 

Clinical Trials: AI Advantages

  • AI-enabled study design could help optimize and accelerate the creation of patient-centric designs.
  • AI is driving more innovative ways of collecting clinical-trial data and reducing reliance on in-person trial sites. For example, by capturing data from body sensors and wearable devices such as bracelets, heart monitors, patches, and sensor-enabled clothing, researchers can monitor a patient’s vital signs and other information remotely and less invasively.
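
As a minimal sketch of the remote-monitoring idea in the second bullet above, the following shows how wearable readings might be screened before escalation to trial staff. The reading format and threshold values are illustrative assumptions, not drawn from any specific trial protocol.

```python
# Minimal sketch of remote vital-sign screening from a wearable feed.
# Thresholds and data layout are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VitalReading:
    patient_id: str
    heart_rate_bpm: int
    spo2_percent: float

def flag_for_review(reading: VitalReading) -> bool:
    """Return True when a reading should be escalated to trial staff."""
    return (reading.heart_rate_bpm < 40 or reading.heart_rate_bpm > 130
            or reading.spo2_percent < 92.0)

stream = [
    VitalReading("P-001", 72, 98.0),
    VitalReading("P-002", 138, 96.5),  # tachycardia -> escalate
    VitalReading("P-003", 68, 90.0),   # low oxygen saturation -> escalate
]
for r in stream:
    if flag_for_review(r):
        print(f"Escalate {r.patient_id}: HR={r.heart_rate_bpm}, SpO2={r.spo2_percent}")
```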

 

Clinical Trials: AI Advantages

  • Coupling AI with robotic process automation can harmonize and link data across different modalities of data collection.
  • Machine learning applied to clinical data could help illuminate complex relationships between different data domains—and enable automated data management.
  • Auto-generating content (using natural language generation) for trial artifact creation can streamline and accelerate the regulatory document-authoring process.
Deloitte, 2022

 

AI in Healthcare: The Patient

  • About six in 10 U.S. adults said they would feel uncomfortable if their provider used artificial intelligence tools to diagnose them and recommend treatments in a care setting, according to a survey from the Pew Research Center.
  • Some 38% of respondents said using AI in healthcare settings would lead to better health outcomes while 33% said it would make them worse, and 27% said it wouldn’t make much of a difference, the survey found.
  • Ultimately, men, younger people and those with higher education levels were the most open to their providers using AI.

 

AI in Healthcare: The Patient

  • While public opinion on AI is still evolving, knowledge about the technology also determined patients’ hesitance levels.
  • Patients who said they had heard little or nothing about AI were more likely to be uncomfortable with their provider using such tools than those who had heard a lot about them, the survey found.
  • Ultimately, 75% of respondents said they are worried their providers are moving too fast in implementing the tools without fully knowing the risks, compared with just 23% who said they are moving too slowly.

 

“Regarding being informed if AI played a big role in their diagnosis or treatment, 66% of respondents deemed it very important and 29.8% stated it was somewhat important. Thirty-one percent of respondents reported being very uncomfortable and 40.5% were somewhat uncomfortable with receiving a diagnosis from an AI algorithm that was accurate 90% of the time but incapable of explaining its rationale. Responses were similar by age and race and ethnicity. Compared with respondents who shared their views about the potential implications of AI for health care, more respondents who answered with “don’t know” deemed it very important to be told when AI played a small role in their diagnosis or treatment (59.7% vs 42.3%) and were very uncomfortable with receiving an AI diagnosis that was accurate 98% of the time but could not be explained (26.7% vs 18.8%).”
Perspectives of Patients About Artificial Intelligence in Health Care
Dhruv Khullar et al.
JAMA Network Open. 2022;5(5):e2210309.

 

“Comfort with AI varied by clinical application. For example, 12.3% of respondents were very comfortable and 42.7% were somewhat comfortable with AI reading chest radiographs, but only 6.0% were very comfortable and 25.2% were somewhat comfortable about AI making cancer diagnoses. Most respondents were very concerned or somewhat concerned about AI’s unintended consequences, including misdiagnosis (91.5%), privacy breaches (70.8%), less time with clinicians (69.6%), and higher health care costs (68.4%). A higher proportion of respondents who self-identified as being members of racial and ethnic minority groups indicated being very concerned about these issues, compared with White respondents.”
Perspectives of Patients About Artificial Intelligence in Health Care
Dhruv Khullar et al.
JAMA Network Open. 2022;5(5):e2210309.

 

“Clinicians, policy makers, and developers should be aware of patients’ views regarding AI. Patients may benefit from education on how AI is being incorporated into care and the extent to which clinicians rely on AI to assist with decision-making. Future work should examine how views evolve as patients become more familiar with AI.”
Perspectives of Patients About Artificial Intelligence in Health Care
Dhruv Khullar et al.
JAMA Network Open. 2022;5(5):e2210309.

 

“AI technologies have been widely applied to medicine and healthcare. However, with the increasing complexity of AI technologies, the related applications become more and more difficult to explain and communicate. To solve this problem, the concept of XAI came into being. XAI not only can make an AI application more transparent, but also can assist in the improvement of the AI application. However, at present, common XAI tools and technologies require background knowledge from various fields and are not tailored to a specific AI application. As a result, the explainability of the AI application may be far from satisfactory.”
An improved explainable artificial intelligence tool in healthcare for hospital recommendation
Yu-Cheng Wang, Tin-Chih Toly Chen, Min-Chi Chiu
Healthcare Analytics 3 (2023) 100147

 

“We firmly believe that the introduction of AI and machine learning in medicine has helped health professionals improve the quality of care that they can deliver and has the promise to improve it even more in the near future and beyond. Just as computer acquisition of radiographic images did away with the x-ray file room and lost images, AI and machine learning can transform medicine. Health professionals will figure out how to work with AI and machine learning as we grow along with the technology. AI and machine learning will not put health professionals out of business; rather, they will make it possible for health professionals to do their jobs better and leave time for the human–human interactions that make medicine the rewarding profession we all value.”
Artificial Intelligence and Machine Learning in Clinical Medicine, 2023 
Charlotte J. Haug, Jeffrey M. Drazen
N Engl J Med 2023;388:1201-8. 

 
