Imaging Pearls ❯ Deep Learning ❯ Developers of DL and AI

  • Purpose: To present a deep learning segmentation model that can automatically and robustly segment all major anatomic structures on body CT images.
    Materials and Methods: In this retrospective study, 1204 CT examinations (from 2012, 2016, and 2020) were used to segment 104 anatomic structures (27 organs, 59 bones, 10 muscles, and eight vessels) relevant for use cases such as organ volumetry, disease characterization, and surgical or radiation therapy planning. The CT images were randomly sampled from routine clinical studies and thus represent a real-world dataset (different ages, abnormalities, scanners, body parts, sequences, and sites). The authors trained an nnUNet segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model’s performance. The trained algorithm was applied to a second dataset of 4004 whole-body CT examinations to investigate age-dependent volume and attenuation changes.
    TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images
    Jakob Wasserthal, et al.
    Radiology: Artificial Intelligence 2023; 5(5):e230024
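The Dice similarity coefficient used to evaluate the model measures volumetric overlap between a predicted and a reference mask: 2|A ∩ B| / (|A| + |B|). As a minimal sketch (not the authors' implementation), it can be computed on flattened binary masks like so:

```python
# Minimal sketch of the Dice similarity coefficient used to score
# segmentation overlap; masks are flattened binary arrays (lists of 0/1).
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 1D "masks": 2 overlapping voxels; 3 predicted and 2 true voxels
print(dice_coefficient([0, 1, 1, 1, 0, 0], [0, 1, 1, 0, 0, 0]))  # 0.8
```

A Dice score of 1.0 indicates perfect overlap and 0 indicates none; the paper's reported 0.943 is averaged over structures and cases.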
  • Results: The proposed model showed a high Dice score (0.943) on the test set, which included a wide range of clinical data with major abnormalities. The model significantly outperformed another publicly available segmentation model on a separate dataset (Dice score, 0.932 vs 0.871; P < .001). The aging study demonstrated significant correlations between age and volume and mean attenuation for a variety of organ groups (eg, age and aortic volume [rs = 0.64; P < .001]; age and mean attenuation of the autochthonous dorsal musculature [rs = −0.74; P < .001]).
    Conclusion: The developed model enables robust and accurate segmentation of 104 anatomic structures. The annotated dataset (https://doi.org/10.5281/zenodo.6802613) and toolkit (https://www.github.com/wasserth/TotalSegmentator) are publicly available.
    TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images
    Jakob Wasserthal, et al.
    Radiology: Artificial Intelligence 2023; 5(5):e230024
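The age correlations above are Spearman rank coefficients (rs), i.e., the Pearson correlation of the ranks. As an illustration only (not the study's code), with no tied values rs reduces to the closed form 1 − 6Σd² / (n(n² − 1)), where d is the rank difference per observation:

```python
# Sketch of the Spearman rank correlation (rs) used in the aging analysis.
# Assumes no tied values; d_i = rank(x_i) - rank(y_i).
def spearman_rho(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

ages    = [30, 45, 60, 75]
volumes = [110, 130, 155, 180]  # hypothetical aortic volumes, mL
print(spearman_rho(ages, volumes))  # 1.0 (perfectly monotone)
```

Because rs depends only on ranks, it captures any monotone age trend (such as increasing aortic volume or decreasing muscle attenuation) without assuming linearity.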
  • Key Points
    ■ The proposed model was trained on a diverse dataset of 1204 CT examinations randomly sampled from routine clinical studies; the dataset contained segmentations of 104 anatomic structures (27 organs, 59 bones, 10 muscles, and eight vessels) that are relevant for use cases such as organ volumetry, disease characterization, and surgical or radiation therapy planning.
    ■ The model achieved a high Dice similarity coefficient (0.943; 95% CI: 0.938, 0.947) on the test set encompassing a wide range of clinical data, including major abnormalities, and outperformed other publicly available segmentation models on a separate dataset (Dice score, 0.932 vs 0.871; P < .001).
    ■ Both the training dataset (https://doi.org/10.5281/zenodo.6802613) and developed model (https://www.github.com/wasserth/TotalSegmentator) are publicly available.
    TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images
    Jakob Wasserthal, et al.
    Radiology: Artificial Intelligence 2023; 5(5):e230024

  • “One major diagnostic task of a radiologist interpreting a chest radiograph is to identify one of more than 200 findings that may be present at imaging. However, schemas are almost universally limited to about 12 classes, understandably because of impracticality of annotating hundreds of findings. As a result, binary classification models that are trained on such schemas exclude serious but uncommon findings, such as pneumomediastinum or bone metastases, which may result in potentially high-risk, false-negative results.”
    Hurdles to Artificial Intelligence Deployment: Noise in Schemas and “Gold” Labels
    Mohamed Abdalla • Benjamin Fine
    Radiology: Artificial Intelligence 2023; 5(2):e220056
    “In summary, underreported noise in schema design and gold labels are widespread and contribute to mistrust and challenges in development, evaluation, and clinical deployment of chest radiograph AI models. Our work (a) describes the concept of schema noise—the lack of conceptual clarity regarding the task at hand, manifesting itself as class overlap (infiltrate vs consolidation), hidden hierarchy (abscess as a leaf node of cavity), and intermingling observations with disorders (consolidation vs pneumonia) and (b) characterizes the second-order variation and quantifies the magnitude of label noise.”
    Hurdles to Artificial Intelligence Deployment: Noise in Schemas and “Gold” Labels
    Mohamed Abdalla • Benjamin Fine
    Radiology: Artificial Intelligence 2023; 5(2):e220056
  • “In conclusion, GANs are a promising novel technology that can offer widespread applications for oncologists. Studies thus far have demonstrated practical applications in improving cancer screening and prognosis, radiotherapy dosing, and biomarker identification. As the technology improves, current challenges will be resolved, and more use cases will be identified. It would be wise for those in the field of oncology to pay attention to improvements in GANs for their potential in improving the quality of patient care, reducing clinicians’ workload, and enhancing oncologic research.”
    Oncological Applications of Deep Learning Generative Adversarial Networks  
    Harrison Phillips, Shelly Soffer, Eyal Klang
    JAMA Oncology May 2022 Volume 8, Number 5; 677-678
    “In addition to synthesizing images from scratch, GANs are capable of image translation, which involves transforming the original image with features from a different domain. Classic examples include transforming a picture of a horse into a zebra and modifying the background of a city from winter to summer. Medical applications of this strategy include image translation between imaging modalities, such as synthesizing computed tomography scans from magnetic resonance imaging. The most widespread oncological use case of image translation involves the optimization of radiotherapy dosages in cancer treatment. Retrospective improvement of radiotherapy dosimetry has been demonstrated for head and neck, prostate, brain, breast, and lung cancers. The implication of adopting this approach would be that the number of necessary imaging studies conducted and the number of patients exposed to ionizing radiation would be reduced.”
    Oncological Applications of Deep Learning Generative Adversarial Networks  
    Harrison Phillips, Shelly Soffer, Eyal Klang
    JAMA Oncology May 2022 Volume 8, Number 5; 677-678
  • “Accurate medical triage is essential for improving patient outcomes and efficient healthcare delivery. Patients increasingly rely on artificial intelligence (AI)-based applications to access healthcare information, including medical triage advice. We assessed the accuracy of triage decisions provided by an AI-based application. We presented 50 clinical vignettes to the AI-based application, seven emergency medicine providers, and five internal medicine physicians. We compared the triage decisions of the AI-based application to those of the individual providers as well as their consensus decisions. When compared to the human clinicians’ consensus triage decisions, the AI-based application performed equal or better than individual human clinicians.”
    Artificial Intelligence-Based Application Provides Accurate Medical Triage Advice When Compared to Consensus Decisions of Healthcare Providers.  
    Delshad S et al.  
    Cureus 13(8): e16956. DOI 10.7759/cureus.16956 

  • “From the summary document of each device, we extracted the following information about how the algorithm was evaluated: the number of patients enrolled in the evaluation study; the number of sites used in the evaluation; whether the test data were collected and evaluated concurrently with device deployment (prospective) or the test set was collected before device deployment (retrospective); and whether stratified performance by disease subtypes or across demographic subgroups was reported. Additionally, we assigned a risk level from 1 to 4 to each device (1 and 2 indicate low risk; 3 and 4 indicate high risk) according to guidelines from an FDA proposal. In total, we compiled 130 approved devices that met our review criteria. We present a compilation of all the devices, organized by body area, risk level, prospective/retrospective studies, and multi-site evaluation.”  
    How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals
    Eric Wu et al.
    Nature Medicine | VOL 27 | April 2021 | 576–584 
  • "Almost all of the AI devices (126 of 130) underwent only retrospective studies at their submission, based on the FDA summaries. None of the 54 high-risk devices were evaluated by prospective studies. For most devices, the test data for the retrospective studies were collected from clinical sites before evaluation, and the endpoints measured did not involve a side-by-side comparison of clinicians’ performances with and without AI.”
    How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals
    Eric Wu et al.
    Nature Medicine | VOL 27 | April 2021 | 576–584 
  • "Among the 130 devices we analyzed, 93 devices did not have publicly reported multi-site assessment included as a part of the evaluation study. Of the 41 devices with the number of evaluation sites reported, 4 devices were evaluated in only one site, and 8 devices were evaluated in only two sites. This suggests that a substantial proportion of approved devices might have been evaluated only at a small number of sites, which often tend to have limited geographic diversity.”
    How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals
    Eric Wu et al.
    Nature Medicine | VOL 27 | April 2021 | 576–584 
  • "Evaluating the performance of AI devices in multiple clinical sites is important for ensuring that the algorithms perform well across representative populations. Encouraging prospective studies with comparison to standard of care reduces the risk of harmful overfitting and more accurately captures true clinical outcomes. Post-market surveillance of AI devices is also needed for understanding and measurement of unintended outcomes and biases that are not detected in prospective, multi-center trials.”
    How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals
    Eric Wu et al.
    Nature Medicine | VOL 27 | April 2021 | 576–584 
  • “Human image analysis is based upon pattern recognition. In medicine, radiologists use pattern recognition when making a diagnosis. It is the heart of the matter. Pattern recognition has two components: pattern learning and pattern matching. Learning is a training or educational process; radiology trainees are taught the criteria of a normal chest X-ray examination and observe hundreds of normal examinations, eventually establishing a mental pattern of “normal.” Matching involves decision-making; when an unknown chest film is presented for interpretation, the radiologist compares this unknown pattern to their “normal” pattern and makes a decision as to whether or not the case is normal, or by exclusion, abnormal.”
    Medical Image Analysis: Human and Machine
    Robert Nick Bryan et al.
    Acad Radiol 2020; 27:76–81
  • "For a computer algorithm to mimic the radiologist in daily practice, it too must incorporate thousands of widgets and vast quantities of diverse data. Such a task may not be impossible, but it does not seem imminent. Furthermore, a radiologist can, when necessary, switch from heuristics to the deliberative mode and “open” the box to explain why they made a particular diagnosis. This often involves the explication of associated KFs (mass effect) that may simultaneously be important for clinical management (decompression).”
    Medical Image Analysis: Human and Machine
    Robert Nick Bryan et al.
    Acad Radiol 2020; 27:76–81
  • “A computer using contemporary computational tools functionally resembling human behavior could, in theory, read in image data as it comes from the scanner, extract KFs, find matching diagnoses, and integrate both into a standardized radiology report. The computer could populate the report with additional quantitative data, including organ/lesion volumetrics and statistical probabilities for the differential diagnosis. We predict that within 10 years this conjecture will be reality in daily radiology practice, with the computer operating at the level of subspecialty fellows. Both will require attending oversight. A combination of slow and fast thinking is important for radiologists and computers.”
    Medical Image Analysis: Human and Machine
    Robert Nick Bryan et al.
    Acad Radiol 2020; 27:76–81
  • “A much better characterization of the foreseeable future may be that although the demand for radiology services is likely to increase, and in an environment in which health care resources will continue to be limited, AI, properly used, could help radiologists manage an increase in the demand for radiological services more efficiently, ensuring that as a specialty we can continue to provide optimal care for our patients.”
    Integrating Artificial Intelligence into Radiologic Practice: A Look to the Future
    Bibb Allen, Keith Dreyer, Geraldine D. McGinty
    JACR 2019 (in press)
  • “Whatever happens in radiology, the process will be equally gradual, allowing radiologists to potentially transform from report generators to the central managers of our patients’ diagnostic information integrating data from a variety of sources all facilitated by AI. If we sit back and do nothing, there is a chance we could be marginalized by AI. On the other hand, if we play a leadership role in AI development, the best days for radiologists, our specialty, and our patients are yet to come. So please consider us AI optimists too—not that AI will replace radiologists, but that AI will make us better physicians.”
    Integrating Artificial Intelligence into Radiologic Practice: A Look to the Future
    Bibb Allen, Keith Dreyer, Geraldine D. McGinty
    JACR 2019 (in press)
  • “Furthermore, AI does not have to be perfect to be helpful. Not all radiologists perform identically on every case we interpret, and for that matter neither will AI. If AI recognizes abnormalities not identified by all radiologists and at the same time all radiologists also find abnormalities not recognized by AI, then the combination of humans plus AI has great potential to improve care.”
    Integrating Artificial Intelligence into Radiologic Practice: A Look to the Future
    Bibb Allen, Keith Dreyer, Geraldine D. McGinty
    JACR 2019 (in press)
  • "Patient preference as a reason to doubt autonomous radiological care by machines is debatable, and patient preferences will likely evolve. However, we consider medical care not unlike air travel. As air travelers, we want the aircraft we are flying on to have all of the latest automated features and recognize that the planes can indeed fly themselves. However, when will the public be ready to fly on pilotless airplanes? The combination of a human pilot assisted by robust computer automation seems to be current public preference. The same could be said for automated health care.”
    Integrating Artificial Intelligence into Radiologic Practice: A Look to the Future
    Bibb Allen, Keith Dreyer, Geraldine D. McGinty
    JACR 2019 (in press)
  • "In conclusion, we agree that there is more “unfounded hype” offered by those who believe AI will autonomously perform diagnostic imaging interpretations and replace other aspects of physician care. We contend AI will significantly impact the practice of radiology but not diminish the need for well-trained radiologists.”
    Integrating Artificial Intelligence into Radiologic Practice: A Look to the Future
    Bibb Allen, Keith Dreyer, Geraldine D. McGinty
    JACR 2019 (in press)
  • “All belong to the supervised learning category where a known example is paired with a known label and both are used to train, or teach, the algorithm. This algorithm can then generalize to new, never before seen data of a similar type and scope. Supervised learning requires the input of trained experts, such as radiologists, and is expensive and time-consuming. Semi-supervised learning builds on a set of supervised learning examples to automatically label new, unlabeled training data. Unsupervised learning is a set of techniques that take untrained, unlabeled data and group it, such as k-means clustering, autoencoders, and dimensionality reduction techniques like principal component analysis.”
    Machine Learning Principles for Radiology Investigators
    Borstelmann SM
    Acad Radiol 2020; 27:13–25
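To make the unsupervised-learning idea in the quote concrete, here is a minimal 1D k-means sketch. It is illustrative only (a real pipeline would apply a library implementation, e.g. scikit-learn, to multidimensional feature vectors), but it shows the core loop: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points.

```python
import random

# Minimal 1D k-means: iteratively assign points to the nearest centroid,
# then move each centroid to the mean of its assigned points.
def kmeans_1d(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from random data points
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for x in points:
            nearest = min(range(k), key=lambda c: abs(x - centroids[c]))
            clusters[nearest].append(x)
        # empty cluster: keep its old centroid
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    return sorted(centroids)

# Two obvious groups: values near 1 and values near 10
print(kmeans_1d([1.0, 1.2, 0.8, 9.9, 10.1, 10.0], k=2))  # ≈ [1.0, 10.0]
```

No labels are used anywhere, which is exactly what distinguishes this from the supervised setting described in the quote.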
  • “While DL algorithms have exceeded the capabilities of classical statistical machine learning techniques on specific imaging tasks like multiclass classification, they require a large amount of data to function, are computationally expensive, and can be relatively opaque. ML algorithms are sometimes an easier way to get to an equivalent result, and can be used in tandem with DL. An initial evaluation of a smaller preliminary dataset may not be possible with a complex DL algorithm, but the lesser data requirements of a ML algorithm may yield significant insights and guidance at an earlier stage of data aggregation and overall study design.”
    Machine Learning Principles for Radiology Investigators
    Borstelmann SM
    Acad Radiol 2020; 27:13–25

  • Tech Giants all have their Eyes on Healthcare. How does it affect you?
    Kasper Juul
    Medium
  • “Medical data in itself is exploding. According to IBM Watson, medical data is expected to double every 73 days by 2020. Furthermore, each person will generate enough health data in their lifetime to fill 300 million books. Physicians simply cannot keep up with the growing amount of information available. Therefore, there’s a huge need in structuring and understanding this data to build a new infrastructure for the future of healthcare.”
    Tech Giants all have their Eyes on Healthcare. How does it affect you?
    Kasper Juul
    Medium
  • “The transportation network company Uber has focused their health ambitions in Uber Health. The problem they’re trying to solve is decreasing no-shows. Every year, 3.6 million Americans miss doctor appointments due to unreliable transportation, and no-show rates are as high as 30% nationwide. The Uber Health solution allows healthcare organizations to provide reliable, comfortable transportation for patients. And while transportation barriers are common across the general population, these barriers are greatest for vulnerable populations, including patients with the highest burden of chronic disease.”
    Tech Giants all have their Eyes on Healthcare. How does it affect you?
    Kasper Juul
    Medium
  • Amazon aims to build a launchpad for healthcare payments and care delivery. Apple is doing this already with the Apple HealthKit. Alphabet is using their data structuring and AI capabilities to enable healthcare organizations to better understand the potential of their data. And if Uber’s grand vision is to become the preferred means of transportation for patients, you would expect integrations with multiple other technologies if there’s a need for quick assessment of a patient’s health.
    Tech Giants all have their Eyes on Healthcare. How does it affect you?
    Kasper Juul
    Medium
  • Innovative Partnerships
  • Google and Mayo Clinic
  • As the global expert in solving rare and complex disease, Mayo Clinic has a long history of excellence in healthcare and medical innovation. This rich legacy has long been supported by Mayo Clinic’s focus on innovation, research, and cutting-edge science. As healthcare increasingly embraces digital technology, the collection, management and analysis of complex healthcare data has become a critical factor in providing advanced care to patients worldwide. 
    With these factors in mind, Mayo Clinic has chosen to partner with Google to positively transform patient and clinician experiences, improve diagnostics and patient outcomes, as well as enable it to conduct unparalleled clinical research.
    This strategic partnership will combine Google’s cloud and AI capabilities and Mayo’s world-leading clinical expertise to improve the health of people—and entire communities—through the transformative impact of understanding insights at scale. Ultimately, we will work together to solve humanity’s most serious and complex medical challenges. 
    How Google and Mayo Clinic will transform the future of healthcare
  • “Machine learning has been used across many industries, including banking and finance, manufacturing, marketing, and telecommunications. Some more common every day examples include e-mail spam filters, face recognition, search engines, speech recognition, and language translation. Many large capital corporations in the digital world including Microsoft (Microsoft Corp, Redmond, Washington, USA), Google (Menlo Park, California, USA), Apple (Apple Inc, Cupertino, California, USA), Facebook (Facebook, Inc, Menlo Park, California, USA), Baidu (Baidu Inc, Beijing, China), and Amazon (Amazon Inc, Seattle, Washington, USA) incorporate machine learning in their products.”
    Machine Learning in Radiology: Applications Beyond Image Interpretation
    Paras Lakhani et al.
    J Am Coll Radiol (in press)

Copyright © 2024 The Johns Hopkins University, The Johns Hopkins Hospital, and The Johns Hopkins Health System Corporation. All rights reserved.