Deep Learning: AI and Patient Care

  • AI and our Patients
  • “Most respondents had positive views about AI’s ability to improve care but had concerns about its potential for misdiagnosis, privacy breaches, reducing time with clinicians, and increasing costs, with racial and ethnic minority groups expressing greater concern. Respondents were more comfortable with AI in specific clinical settings, and most wanted to know when AI was used in their care. One limitation of this study was it involved a panel that had agreed to participate in surveys, which may limit generalizability. In addition, compared with nonrespondents, respondents were younger, but no significant differences by sex or race and ethnicity were found. Clinicians, policy makers, and developers should be aware of patients’ views regarding AI. Patients may benefit from education on how AI is being incorporated into care and the extent to which clinicians rely on AI to assist with decision-making. Future work should examine how views evolve as patients become more familiar with AI.”
    Perspectives of Patients About Artificial Intelligence in Health Care.
    Khullar D et al.
    JAMA Netw Open. 2022 May 2;5(5):e2210309
  • “The growing use of artificial intelligence (AI) in health care has raised questions about who should be held liable for medical errors that result from care delivered jointly by physicians and algorithms. In this survey study comparing views of physicians and the U.S. public, we find that the public is significantly more likely to believe that physicians should be held responsible when an error occurs during care delivered with medical AI, though the majority of both physicians and the public hold this view (66.0% vs 57.3%; P = .020). Physicians are more likely than the public to believe that vendors (43.8% vs 32.9%; P = .004) and healthcare organizations should be liable for AI-related medical errors (29.2% vs 22.6%; P = .05). Views of medical liability did not differ by clinical specialty. Among the general public, younger people are more likely to hold nearly all parties liable.”
    Public vs physician views of liability for artificial intelligence in health care.  
    Khullar D et al.  
    J Am Med Inform Assoc. 2021 Jul 14;28(7):1574-1577
  • “Moreover, patients must be properly informed about the relevant concepts. Many patients are unfamiliar with the concept of over-diagnosis and therefore may be unable to weigh the relative risk of unnecessary diagnosis and treatment against the risk of failing to discover a cancer. Moreover, patients may not always have preferences about such outcomes. There must still be a sensible default decision threshold that can be used in cases in which patients choose to withhold their attitudes or simply have no preferences.”
    Clinical decisions using AI must consider patient values  
    Jonathan Birch, Kathleen A. Creel, Abhinav K. Jha, et al.
    Nature Medicine | VOL 28 | Feb 2022 | 226–235 
  • “A risk-profiling questionnaire suitable for cancer screening would probe the patient’s attitudes about the risk of over-diagnosis, false-positive and false-negative results, and over-treatment versus under-treatment, and the expected value to the patient of additional years of life of varying quality levels. The questionnaire might also ask patients to respond to statements such as ‘I would rather risk surgical complications to treat a benign tumor than risk missing a cancerous tumor’.”
    Clinical decisions using AI must consider patient values  
    Jonathan Birch, Kathleen A. Creel, Abhinav K. Jha, et al.
    Nature Medicine | VOL 28 | Feb 2022 | 226–235
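    (An illustrative sketch of how such stated preferences could set a decision threshold appears after this list.)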
  • “Despite relatively weak evidence supporting the use of AI in routine clinical practice in health care settings, AI models continue to be marketed and deployed. A recent example is the Epic Sepsis Model. While this model was widely implemented in hundreds of US hospitals, a recent study showed that it performed significantly worse in correctly identifying patients with early sepsis and improving patient outcomes in a clinical setting compared with performance observed during development of the model.”
    Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence  
    Cornelius A. James et al.
    JAMA. Published online March 21, 2022
  • “AI will soon become ubiquitous in health care. Building on lessons learned as implementation strategies continue to be devised, it will be essential to consider the key role of clinicians as end users of AI-developed algorithms, processes, and risk predictors. It is imperative that clinicians have the knowledge and skills to assess and determine the appropriate application of AI outputs, for their own clinical practice and for their patients. Rather than being replaced by AI, these new technologies will create new roles and responsibilities for clinicians.”
    Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence  
    Cornelius A. James et al.
    JAMA. Published online March 21, 2022
  • “In the hospital context, Alexa could become an integral feature of patient rooms because it would allow patients to change the channel on the television, listen to prerecorded physician instructions, find out when their next dose of medication is due, receive daily briefs on what to expect, and actively engage or respond to other aspects of their hospital stay.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “After discharge, Alexa can aid communication between the patient and the medical team (eg, “Ask my doctor when I should change my dressing”). Through machine learning, it may be possible for Alexa to figure out what patients want to know but are reluctant to ask and can then provide that information up front.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “Specifically for radiology, home artificial intelligence devices could prove to be a valuable tool for offering instructions to patients on how to prepare for imaging examinations and what to expect when they arrive at an imaging center or hospital.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “Integrating Alexa, or similar platforms, into everyday workflow may free the radiologist from many otherwise time-consuming tasks. For example, integrating artificial intelligence with voice technology into the phone network of the hospital would greatly speed the process of contacting ordering clinicians to report critical findings.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “He understands that every guest is always the most important person in the room, and we have instilled that ethos in all of our staff members. Interestingly, this is one area where your industry, in my experience, fails: several years ago, my father underwent major surgery, and I felt that the health care staff did not consider him to be the most important person in the room and were simply not listening to what he and my family had to say to them.”
    Stories From the Kitchen: Lessons for Radiology From the Restaurant Business
    Cindy Wolf, Elliot K. Fishman, MD, Karen M. Horton, MD, Siva P. Raman
    J Am Coll Radiol. 2015 Mar;12(3):307-8
  • “I realize that managing the customer experience will undoubtedly be harder in a big organization like yours. Nevertheless, that is no excuse not to try. Hire people who care about and believe in what your organization is doing, and keep paying attention to every aspect of the customer experience.”
    Stories From the Kitchen: Lessons for Radiology From the Restaurant Business
    Cindy Wolf, Elliot K. Fishman, MD, Karen M. Horton, MD, Siva P. Raman
    J Am Coll Radiol. 2015 Mar;12(3):307-8
  • “At our institution, likely reflective of practices across the country, radiologists pay little attention to this group of employees, virtually never interact with them, and are often blind to the importance of these staff members in driving patients’ perception of a practice and the ultimate economic success of a radiology group.”
    Stories From the Kitchen: Lessons for Radiology From the Restaurant Business
    Cindy Wolf, Elliot K. Fishman, MD, Karen M. Horton, MD, Siva P. Raman
    J Am Coll Radiol. 2015 Mar;12(3):307-8
  • Even patients with substantial expertise in science or particular medical problems still rely on physicians during times of stress and uncertainty, and need them to perform procedures, interpret diagnostic tests, and prescribe medications. In these situations, reciprocal trust is central to the functioning of a health system and leads to higher treatment adherence, improvements in self-reported health, and better patient experience. So the question is: as technology continues to change relationships between patients and physicians, how can patient-physician trust be maintained or even improved?
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Prior work has examined the accuracy of AI, potential for biases, and lack of explainability (“black box”), all of which may affect physicians’ and patients’ trust in health care AI, as well as the potential for AI to replace physicians. However, in settings for which care will still be provided by a physician, whether and how AI will affect trust between physicians and patients has yet to be addressed. The potential effects of AI on trust between physicians and patients should be explicitly designed and planned for.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “When considering the implications of health care AI on trust, a broad range of health care AI applications need to be considered, including (1) use of health care AI by physicians and systems, such as for clinical decision support and system strengthening, physician assessment and training, quality improvement, clinical documentation, and nonclinical tasks, such as scheduling and notifications; (2) use of health care AI by patients including triage, diagnosis, and self-management; and (3) data for health care AI involving the routine use of patient data to develop, validate, and fine-tune health care AI as well as to personalize the output of health care AI.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Each of these applications has the potential to enable and disable the 3 components of trust: competency, motive, and transparency.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • Competency, Motive, Transparency
  • Competency reflects both the extent to which physicians are perceived to have clinical mastery and patients’ knowledge and self-efficacy of their own health. Because much of AI is and will be used to augment the abilities of physicians, there is potential to increase physician competency and enable patient-physician trust. This includes not only AI-assisted clinical decision support (eg, by suggesting possible diagnoses to consider) but also the use of AI for physician training and quality improvement (eg, by providing automated feedback to physicians about their diagnostic performance). AI can also serve an important role in empowering patients to better understand their health and self-manage their conditions.
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “On the other hand, trust will be compromised by AI that is inaccurate, biased, or reflective of poor-quality practices as well as AI that lacks explainability and inappropriately conflicts with physician judgment and patient autonomy.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Motive refers to a patient’s trust that the physician is acting solely in the interests of the patient. Patients are likely to perceive motive through the lens of the extent of the open dialogue they have with their physicians. Through greater automation of low-value tasks, such as clinical documentation, it is possible that AI will free up physicians to identify patients’ goals, barriers, and beliefs, and counsel them about their decisions and choices, thereby increasing trust. Conversely, AI could automate more of the physician’s workflow, but then fill freed-up time with more patients with clinical issues that are more cognitively or emotionally complex. AI could also enable greater distribution of care across a care team (both human agents and computer agents).”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Whether this would enhance or harm trust would depend on the degree of collaboration among team members and the information flow, and could compromise trust if robust, longitudinal relationships were impeded.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Well-designed AI that allows patients to appreciate and understand that clinical decisions are based on evidence and expert consensus should enhance trust. It can also process patient data (including health care and consumer data) to provide physicians insight on patients’ behaviors and preferences.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Moreover, if patient data are routinely shared with external entities for AI development, patients may become less transparent about divulging their information to physicians, and physicians may be more reluctant to acknowledge their own uncertainties. AI that does not explain the source or nature of its recommendations (“black box”) may also erode trust.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Where health care AI is implemented by health systems, it should be directed toward automating the transactional, business, and documentation aspects of care; doing so may provide time to physicians to engage with their patients more deeply. If AI is effective in relieving physicians from the burdens of data entry and other clerical tasks, much of the reclaimed time should be made available for patient care, shared decision-making, and counseling, which are the cornerstones of effective health care that are often compromised today.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “When health care AI is developed by health systems and third-party organizations using patient data, physicians should be mindful of the effect on patient-physician trust. It will be important to develop ethical approaches that allow for patient input into decisions by health systems to share data for the purposes of developing AI through some combination of individual patient consent and the involvement of patient advocacy groups.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
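
A note on the decision threshold discussed in the Birch et al. excerpts above: the idea that a patient's stated attitudes toward false negatives and false positives should shape when an AI screening model's output is acted on, with a sensible default when no preferences are given, can be illustrated with the standard expected-cost threshold. The sketch below is only an illustration of that idea, not code or a method from the cited paper; the function and variable names are hypothetical.

    # Illustrative sketch (not from the cited papers): turning a patient's stated
    # relative costs for a missed cancer vs. an unnecessary workup into a
    # probability threshold for acting on an AI screening model's output.

    def preference_threshold(cost_false_positive, cost_false_negative):
        # Expected cost of flagging a case = (1 - p) * cost_false_positive
        # Expected cost of not flagging it = p * cost_false_negative
        # Flagging is cheaper whenever p > C_FP / (C_FP + C_FN).
        return cost_false_positive / (cost_false_positive + cost_false_negative)

    def threshold_or_default(cost_fp=None, cost_fn=None, default=0.5):
        # Fall back to a default threshold when the patient withholds preferences
        # or has none, as the excerpt above suggests.
        if cost_fp is None or cost_fn is None:
            return default
        return preference_threshold(cost_fp, cost_fn)

    # Example: a patient who rates missing a cancer as nine times worse than an
    # unnecessary biopsy gets a threshold of 0.1, so a model probability of 0.15
    # would be flagged for workup; a patient with no stated preferences gets 0.5.
    t = threshold_or_default(cost_fp=1.0, cost_fn=9.0)
    print(t, 0.15 >= t)            # 0.1 True
    print(threshold_or_default())  # 0.5

In practice such a threshold would sit downstream of the model and alongside clinical judgment, consistent with the emphasis elsewhere in this list on clinicians remaining the end users and decision-makers for AI outputs.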
