Imaging Pearls: Deep Learning and Cardiothoracic Apps


  • Accurate identification and localization of abnormalities from radiology images play an integral part in clinical diagnosis and treatment planning. Building a highly accurate prediction model for these tasks usually requires a large number of images manually annotated with labels and finding sites of abnormalities. In reality, however, such annotated data are expensive to acquire, especially the ones with location annotations. We need methods that can work well with only a small amount of location annotations. To address this challenge, we present a unified approach that simultaneously performs disease identification and localization through the same underlying model for all images. We demonstrate that our approach can effectively leverage both class information as well as limited location annotation, and significantly outperforms the comparative reference baseline in both classification and localization tasks.
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al. arXiv, March 2018 (in press)
  • “We propose a unified model that jointly models disease identification and localization with limited localization annotation data. This is achieved through the same underlying prediction model for both tasks. Quantitative and qualitative results demonstrate that our method significantly outperforms the state-of-the-art algorithm”
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al. arXiv, March 2018 (in press)
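    The training recipe this abstract describes can be pictured as a single backbone with a patch-grid head: every image contributes an image-level classification loss, while only the subset with location annotations contributes a patch-level localization loss. Below is a minimal illustrative PyTorch sketch of that idea; the ResNet-18 backbone, max-pooling aggregation, and loss masking are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: joint disease identification and localization
# with partial location supervision (not the paper's released code).
import torch
import torch.nn as nn
import torchvision

class IdentifyAndLocalize(nn.Module):
    def __init__(self, num_diseases: int = 14, grid: int = 7):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # Keep the convolutional trunk, drop average pooling and the FC layer.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(grid)              # B x 512 x grid x grid
        self.patch_head = nn.Conv2d(512, num_diseases, 1)   # per-patch logits

    def forward(self, x):
        patch_logits = self.patch_head(self.pool(self.features(x)))  # B x C x g x g
        # Image-level logit: max over patches (one simple aggregation choice).
        image_logits = patch_logits.flatten(2).max(dim=2).values     # B x C
        return image_logits, patch_logits

def joint_loss(image_logits, patch_logits, image_labels, patch_labels, has_location):
    """image_labels: B x C floats in {0,1}; patch_labels: B x C x g x g floats;
    has_location: length-B bool tensor, True where a location annotation exists."""
    bce = nn.functional.binary_cross_entropy_with_logits
    cls_loss = bce(image_logits, image_labels)            # every image contributes
    if has_location.any():                                # only annotated images contribute
        loc_loss = bce(patch_logits[has_location], patch_labels[has_location])
    else:
        loc_loss = image_logits.new_zeros(())
    return cls_loss + loc_loss
```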

  • “To address these two issues, the authors made use of a large data set of 103489 chest radiographs obtained between 2007 and 2016 in 46712 patients. Only 5232 patients with 7390 radiographs had a BNP test value available. This data set with BNP data was termed “labeled,” and the other data set without BNP data was termed “unlabeled.” In the labeled data set, BNP level was dichotomized at 100 ng/L, above which CHF was defined as present. The labeled data set was divided into a training data set (80% of the data) and a test data set (20% of the data).”
    Using a Deep Learning Network to Diagnose Congestive Heart Failure
    Ngo LH
    Radiology 2019; 00:1–2 • https://doi.org/10.1148/radiol.2018182341
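    As described, the labeling rule reduces to a few lines: radiographs with a BNP value form the labeled set, CHF is defined as present when BNP exceeds 100 ng/L, and the labeled set is split 80/20 into training and test data. A minimal sketch of that preprocessing, with an assumed column name rather than the study's actual code:

```python
# Illustrative sketch of the labeling/splitting rule described above
# (the column name "bnp_ng_per_l" is an assumption, not the study's schema).
import pandas as pd
from sklearn.model_selection import train_test_split

def label_and_split(radiographs: pd.DataFrame, seed: int = 0):
    labeled = radiographs[radiographs["bnp_ng_per_l"].notna()].copy()
    unlabeled = radiographs[radiographs["bnp_ng_per_l"].isna()].copy()
    # CHF defined as present when BNP exceeds 100 ng/L.
    labeled["chf"] = (labeled["bnp_ng_per_l"] > 100).astype(int)
    # 80% training / 20% test split of the labeled radiographs.
    train_set, test_set = train_test_split(
        labeled, test_size=0.2, random_state=seed, stratify=labeled["chf"])
    return train_set, test_set, unlabeled
```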
  • Nevertheless, clearly the work of Seah et al is highly innovative and has wide applications in many different areas in medical imaging. The concept of GVR is in fact similar to the idea of counterfactuals used in causal inference studies. A GVR-generated deep learning neural network system (as nicely implemented in this study) would definitely improve over time as more labeled images, finer-resolution images, and improved machine learning algorithms become available. One can easily imagine having this system as an additional tool to assist radiologists in delivering better diagnostic information to their patients.
    Using a Deep Learning Network to Diagnose Congestive Heart Failure
    Ngo LH
    Radiology 2019; 00:1–2 • https://doi.org/10.1148/radiol.2018182341
  • BriefCase is a radiological computer-aided triage and notification software indicated for use in the analysis of non-enhanced head CT images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive findings of pathologies in head CT images, namely Intracranial Hemorrhage (ICH).
    BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected ICH on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
    The results of BriefCase are intended to be used in conjunction with other patient information and based on professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
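    Functionally, the workflow described above reorders a reading worklist and raises an informational notification while leaving the original study untouched. A schematic sketch of that pattern with hypothetical data structures (this is not the BriefCase software or its API):

```python
# Schematic triage/notification pattern with hypothetical types;
# the AI flag only reprioritizes the worklist and alerts the reader.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Study:
    accession: str
    acquired_at: datetime
    suspected_ich: bool = False   # set by a standalone AI analysis

def prioritize(worklist: List[Study]) -> List[Study]:
    # Flagged studies first; within each group, oldest first.
    return sorted(worklist, key=lambda s: (not s.suspected_ich, s.acquired_at))

def notify(study: Study) -> None:
    if study.suspected_ich:
        # Preview images are informational only; the clinician still reads
        # the full study per the standard of care.
        print(f"Suspected ICH flagged on {study.accession}: review full images.")
```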
  • AI and Pathology in Lung Cancer
  • Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them—STK11, EGFR, FAT1, SETBP1, KRAS and TP53—can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations.
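    A minimal sketch of the classification setup the abstract describes: fine-tuning Inception v3 for three tile-level classes (LUAD, LUSC, normal), with the network's auxiliary classifier included in the training loss. The tile size, optimizer, and loss weighting here are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative fine-tuning of Inception v3 for LUAD / LUSC / normal tiles.
import torch
import torch.nn as nn
import torchvision

def build_model(num_classes: int = 3) -> nn.Module:
    model = torchvision.models.inception_v3(weights=None, aux_logits=True)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
    return model

model = build_model()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

tiles = torch.randn(8, 3, 299, 299)     # stand-in batch of 299x299 slide tiles
labels = torch.randint(0, 3, (8,))      # 0 = LUAD, 1 = LUSC, 2 = normal

logits, aux_logits = model(tiles)       # training mode returns main and auxiliary logits
loss = criterion(logits, labels) + 0.4 * criterion(aux_logits, labels)
loss.backward()
optimizer.step()
```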
  • Purpose: To develop and validate a deep learning–based automatic detection algorithm (DLAD) for malignant pulmonary nodules on chest radiographs and to compare its performance with physicians including thoracic radiologists.
    Conclusion: This deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nodule detection performance for malignant pulmonary nodules on chest radiographs, and it enhanced physicians’ performances when used as a second reader.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al. Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
  • Materials and Methods: For this retrospective study, DLAD was developed by using 43 292 chest radiographs (normal radiograph–to–nodule radiograph ratio, 34 067:9225) in 34 676 patients (healthy-to-nodule ratio, 30 784:3892; 19 230 men [mean age, 52.8 years; age range, 18–99 years]; 15 446 women [mean age, 52.3 years; age range, 18–98 years]) obtained between 2010 and 2015, which were labeled and partially annotated by 13 board-certified radiologists, in a convolutional neural network. Radiograph classification and nodule detection performances of DLAD were validated by using one internal and four external data sets from three South Korean hospitals and one U.S. hospital. For internal and external validation, radiograph classification and nodule detection performances of DLAD were evaluated by using the area under the receiver operating characteristic curve (AUROC) and jackknife alternative free-response receiver-operating characteristic (JAFROC) figure of merit (FOM), respectively. An observer performance test involving 18 physicians, including nine board-certified radiologists, was conducted by using one of the four external validation data sets. Performances of DLAD, physicians, and physicians assisted with DLAD were evaluated and compared.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
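    The per-radiograph endpoint above (AUROC) is the area under the ROC curve of the algorithm's per-image malignancy probabilities against the ground truth; a minimal sketch with made-up numbers follows (the lesion-level JAFROC figure of merit requires mark-rating data and dedicated analysis and is not shown).

```python
# Illustrative AUROC computation for per-radiograph classification.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])                     # 1 = radiograph with a malignant nodule
y_score = np.array([0.10, 0.40, 0.80, 0.90, 0.20, 0.65])  # per-radiograph probability from the model
print(f"AUROC = {roc_auc_score(y_true, y_score):.3f}")
```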
  • Results: According to one internal and four external validation data sets, radiograph classification and nodule detection performances of DLAD were a range of 0.92–0.99 (AUROC) and 0.831–0.924 (JAFROC FOM), respectively. DLAD showed a higher AUROC and JAFROC FOM at the observer performance test than 17 of 18 and 15 of 18 physicians, respectively (P < .05), and all physicians showed improved nodule detection performances with DLAD (mean JAFROC FOM improvement, 0.043; range, 0.006–0.190; P < .05).
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
  • Summary: Our deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nodule detection performance for malignant pulmonary nodules on chest radiographs, and when used as a second reader, it enhanced physicians’ performances.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237
  • Implications for Patient Care
    - Our deep learning–based automatic detection algorithm showed excellent detection performances on both a per-radiograph and per-nodule basis in one internal and four external validation data sets.
    - Our deep learning–based automatic detection algorithm demonstrated higher performance than the thoracic radiologist group.
    - When accompanied by our deep learning–based automatic detection algorithm, all physicians improved their nodule detection performances.

  • “The process of achieving value in terms of medical decision support does not remove the clinician or radiologist, but instead, provides easier access to information that might otherwise be inaccessible, inefficient, or difficult to integrate in real-time for the consulting physician. When this information is distilled in a way available to the radiologist, it becomes knowledge that can positively impact the clinician’s judgment in a personalized way in real-time.”


    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
  • “Many tools have been developed to risk stratify patients into categories of pretest probability for CAD by generalizing patients into low-risk, medium-risk, and high-risk categories. Examples such as the Diamond and Forrester method, the Duke Clinical Score, and the Framingham Risk Score incorporate prior clinical history of cardiac events, certain characteristics of the chest pain, family history, medical history, age, sex, and results of a lipid panel. Imaging findings have been used in this type of risk stratification as well, with coronary calcium scoring.”


    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
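    The common thread of these tools is a mapping from an estimated pretest probability of CAD to a low/medium/high tier; a purely illustrative sketch with hypothetical cut points follows (this is not the Diamond and Forrester method, the Duke Clinical Score, or the Framingham Risk Score).

```python
# Illustrative three-tier binning of a pretest probability of CAD.
# The 15% / 65% cut points are hypothetical, for demonstration only.
def risk_category(pretest_probability: float) -> str:
    if pretest_probability < 0.15:
        return "low-risk"
    if pretest_probability < 0.65:
        return "medium-risk"
    return "high-risk"

print(risk_category(0.40))   # -> "medium-risk"
```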
  • “Importantly for radiologists, machine learning algorithms can help address many problems in current-day radiology practices that do not involve image interpretation. Although much of the attention in the machine learning space has focused on the ability of machines to classify image findings, there are many other useful applications of machine learning that will improve efficiency and utilization of radiology practices today. Moreover, we may see a world where a symbiosis of subspecialty experts and machines lead to better care than could be provided by either one alone. Those practices that implement these technologies today are likely to better position themselves for the future.” 


    Machine Learning in Radiology: Applications Beyond Image Interpretation
    Paras Lakhani et al.
    J Am Coll Radiol (in press)