Deep Learning and Cardiothoracic Apps

  • The applications of AI for imaging of CAD using CT span multiple arenas including plaque detection, plaque characterization, improving risk stratification, and clinical decision-making. AI has been applied to both CAC scoring and CCTA. Studies have shown the ability of AI to detect hemodynamically significant stenosis across a broad range of imaging parameters and its potential ability to decrease the time needed for analysis of images for stenosis. AI can significantly reduce the time to perform quantitative plaque analysis, an important barrier to its application in clinical practice.
    Coronary Artery Disease: Role of Computed Tomography and Recent Advances
    Elizabeth Lee et al
    Radiol Clin N Am 2024 (in press)
  • “One trial of AI-based CCTA interpretation in patients with stable chest pain referred for ICA found a lower cost compared with conventional interpretation due to lower rates of referral for ICA, without impacting the occurrence of cardiac events. A comprehensive review of all potential applications for AI in CT imaging of CAD is beyond the scope of this review; however, AI is certain to change how imaging is ordered, performed, and interpreted in the near future.”  
    Coronary Artery Disease: Role of Computed Tomography and Recent Advances
    Elizabeth Lee et al
    Radiol Clin N Am 2024 (in press)
  • Background: Chest radiography remains the most common radiologic examination, and interpretation of its results can be difficult.
    Purpose: To explore the potential benefit of artificial intelligence (AI) assistance in the detection of thoracic abnormalities on chest radiographs by evaluating the performance of radiologists with different levels of expertise, with and without AI assistance.
    Materials and Methods: Patients who underwent both chest radiography and thoracic CT within 72 hours between January 2010 and December 2020 in a French public hospital were screened retrospectively. Radiographs were randomly included until reaching 500 radiographs, with about 50% of radiographs having abnormal findings. A senior thoracic radiologist annotated the radiographs for five abnormalities (pneumothorax, pleural effusion, consolidation, mediastinal and hilar mass, lung nodule) based on the corresponding CT results (ground truth). A total of 12 readers (four thoracic radiologists, four general radiologists, four radiology residents) read half the radiographs without AI and half the radiographs with AI (ChestView; Gleamer). Changes in sensitivity and specificity were measured using paired t tests.
    Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs
    Souhail Bennani et al.
    Radiology 2023; 309(3):e230860
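A minimal sketch of the paired design described above, in which each reader's sensitivity with AI is compared against the same reader's sensitivity without AI. The per-reader values below are invented for illustration, and scipy stands in for whatever statistical software the study actually used:

```python
# Hypothetical per-reader sensitivities for 12 readers, without and with AI.
# Each reader serves as their own control, hence a *paired* t test.
from scipy import stats

sens_without = [0.55, 0.60, 0.58, 0.62, 0.70, 0.68, 0.72, 0.66, 0.50, 0.52, 0.57, 0.61]
sens_with    = [0.78, 0.80, 0.75, 0.81, 0.85, 0.83, 0.86, 0.82, 0.74, 0.76, 0.79, 0.80]

t_stat, p_value = stats.ttest_rel(sens_with, sens_without)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```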
  • Results: The study included 500 patients (mean age, 54 years ± 19 [SD]; 261 female, 239 male), with 522 abnormalities visible on 241 radiographs. On average, for all readers, AI use resulted in an absolute increase in sensitivity of 26% (95% CI: 20, 32), 14% (95% CI: 11, 17), 12% (95% CI: 10, 14), 8.5% (95% CI: 6, 11), and 5.9% (95% CI: 4, 8) for pneumothorax, consolidation, nodule, pleural effusion, and mediastinal and hilar mass, respectively (P < .001). Specificity increased with AI assistance (3.9% [95% CI: 3.2, 4.6], 3.7% [95% CI: 3, 4.4], 2.9% [95% CI: 2.3, 3.5], and 2.1% [95% CI: 1.6, 2.6] for pleural effusion, mediastinal and hilar mass, consolidation, and nodule, respectively), except in the diagnosis of pneumothorax (−0.2%; 95% CI: −0.36, −0.04; P = .01). The mean reading time was 81 seconds without AI versus 56 seconds with AI (31% decrease, P < .001).
    Conclusion: AI-assisted chest radiography interpretation resulted in absolute increases in sensitivity for all radiologists of various levels of expertise and reduced the reading times; specificity increased with AI, except in the diagnosis of pneumothorax.
    Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs
    Souhail Bennani et al.
    Radiology 2023; 309(3):e230860
  • Summary
    Artificial intelligence assistance can improve the detection accuracy of thoracic abnormalities on chest radiographs across radiologists with varying levels of expertise, leading to marked improvements in sensitivity and a reduction in interpretation time.
    Key Results
    ■ In a retrospective study of 500 patients who underwent chest radiography and thoracic CT, artificial intelligence (AI)-assisted chest radiography interpretation resulted in increased sensitivity of 6%–26% across all abnormality types (P < .001) for all readers, including thoracic radiologists, general radiologists, and radiology residents.
    ■ Mean reading time was 81 seconds without AI versus 56 seconds with AI (a decrease of 31%, P < .001), with a 17% reduction for radiographs with abnormalities versus a 38% reduction for radiographs with no abnormalities.
    Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs
    Souhail Bennani et al.
    Radiology 2023; 309(3):e230860
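The 31% reading-time reduction quoted above is direct arithmetic on the reported means:

\[ \frac{81\ \text{s} - 56\ \text{s}}{81\ \text{s}} \approx 0.309 \approx 31\% \]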
  • Our results showed that AI assistance resulted in absolute increases in sensitivity for all readers of various levels of experience, including general radiologists and radiology residents, in detecting all five types of abnormalities on chest radiographs: from 5.3% for mediastinal and hilar mass to 25.3% for pneumothorax (P < .001). Specificity increased with AI assistance (from 2.1% [95% CI: 1.6, 2.6] for nodule to 3.9% [95% CI: 3.2, 4.6] for pleural effusion), except in the diagnosis of pneumothorax (−0.2%; 95% CI: −0.36, −0.04; P = .01). Although unassisted thoracic radiologists outperformed unassisted general radiologists for the five abnormality types, assisted thoracic radiologists outperformed assisted general radiologists only in the detection of consolidation (73.9% [95% CI: 67, 80] vs 70.5% [95% CI: 64, 77]; P = .01). Finally, the mean reading time was 81 seconds without AI versus 56 seconds with AI, for a 31% reduction (P < .001), with a 17% reduction for radiographs with abnormalities and a 38% reduction for radiographs with no abnormalities.
    Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs
    Souhail Bennani et al.
    Radiology 2023; 309(3):e230860
  • “In regard to the impact of AI on reading time, there are conflicting data, with some reports citing a 10% reduction in reading time and others citing an increase of more than 100%. In our study, the 31% decrease in reading time was greater than previously reported. As in the study by Shin et al, we observed that the time saved in reading is greater for radiographs without abnormalities, which represent the majority of chest radiographs in clinical practice.”
    Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs
    Souhail Bennani et al.
    Radiology 2023; 309(3):e230860
  • Purpose: To evaluate the diagnostic efficacy of artificial intelligence (AI) software in detecting incidental pulmonary embolism (IPE) at CT and shorten the time to diagnosis with use of radiologist reading worklist prioritization.
    Conclusion: AI-assisted workflow prioritization of IPE on routine CT scans in oncology patients showed high diagnostic accuracy and significantly shortened the time to diagnosis in a setting with a backlog of examinations.  
    Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT
    Laurens Topff et al.
    Radiology: Cardiothoracic Imaging 2023; 5(2):e220163
  • Results: In total, 11 736 CT scans in 6447 oncology patients (mean age, 63 years ± 12 [SD]; 3367 men) were included. Prevalence of IPE was 1.3% (51 of 3837 scans), 1.4% (54 of 3920 scans), and 1.0% (38 of 3979 scans) for the respective time periods. The AI software detected 131 true-positive, 12 false-negative, 31 false-positive, and 11 559 true-negative results, achieving 91.6% sensitivity, 99.7% specificity, 99.9% negative predictive value, and 80.9% positive predictive value. During prospective evaluation, AI-based worklist prioritization reduced the median detection and notification time (DNT) for IPE-positive examinations to 87 minutes (vs routine workflow of 7714 minutes and human triage of 4973 minutes). Radiologists’ missed rate of IPE was significantly reduced from 44.8% (47 of 105 scans) without AI to 2.6% (one of 38 scans) when assisted by the AI tool (P < .001).
    Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT
    Laurens Topff et al.
    Radiology: Cardiothoracic Imaging 2023; 5(2):e220163
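The accuracy figures in these results follow directly from the reported confusion matrix, which makes a useful sanity check:

```python
# Counts as reported by Topff et al.: 131 TP, 12 FN, 31 FP, 11 559 TN.
TP, FN, FP, TN = 131, 12, 31, 11559

sensitivity = TP / (TP + FN)   # 131/143     -> 91.6%
specificity = TN / (TN + FP)   # 11559/11590 -> 99.7%
ppv         = TP / (TP + FP)   # 131/162     -> 80.9%
npv         = TN / (TN + FN)   # 11559/11571 -> 99.9%

print(f"sens {sensitivity:.1%}  spec {specificity:.1%}  PPV {ppv:.1%}  NPV {npv:.1%}")
```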
  • “Consequently, long report turnaround times (TATs) pose a risk for delayed diagnosis of unsuspected critical findings. Artificial intelligence (AI) applications that can automatically detect critical findings at imaging can be used to prioritize scans in the reading worklist of the radiologist, with the aim of shortening time to diagnosis and communication with the treating physician. AI-based prioritization tools have been studied for use cases such as intracranial hemorrhage at CT, acute pathologic abnormalities on chest radiographs, and pulmonary embolism (PE) on dedicated CT pulmonary angiograms (CTPAs), with varying results.”
    Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT
    Laurens Topff et al.
    Radiology: Cardiothoracic Imaging 2023; 5(2):e220163
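The prioritization mechanism the authors describe can be sketched as a priority queue in which AI-flagged scans jump ahead of the first-in, first-out backlog. This is an illustrative design only, not the deployed product's implementation; the scan IDs and priority scheme are invented:

```python
import heapq
import itertools

counter = itertools.count()   # tie-breaker that preserves arrival order
worklist = []

def add_scan(scan_id, ai_flagged):
    # Priority 0 = AI-flagged suspected embolism, 1 = routine backlog.
    heapq.heappush(worklist, (0 if ai_flagged else 1, next(counter), scan_id))

for scan_id, flagged in [("ct_001", False), ("ct_002", True), ("ct_003", False)]:
    add_scan(scan_id, flagged)

while worklist:
    _, _, scan_id = heapq.heappop(worklist)
    print("read next:", scan_id)   # ct_002 is read first, then ct_001, ct_003
```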
  • Key Points
    ■ Artificial intelligence (AI) software for detecting incidental pulmonary embolism (IPE) at chest CT in patients with cancer showed high diagnostic accuracy in a large sample of 11 736 scans (sensitivity, 91.6%; specificity, 99.7%; negative predictive value, 99.9%).
    ■ In a practice with a backlog of unreported examinations, AI-based worklist prioritization reduced the median detection and notification time of IPE in flagged scans from several days to 1.0 hour.
    ■ The missed rate of IPE was significantly reduced from 44.8% to 2.6% when radiologists were assisted by the AI tool (P < .001).  
    Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT
    Laurens Topff et al.
    Radiology: Cardiothoracic Imaging 2023; 5(2):e220163
  • “The intended use of the investigated AI software is limited to workflow triage and not diagnostics (32). Automated worklist prioritization can assist radiologists, who remain responsible for verifying flagged examinations and must be aware of possible false-negative findings. Our study showed that a considerable number of scans (315 of 4294 [7.3%]) were not analyzed by the deployed AI software due to issues with data retrieval and technical validation, resulting in delayed diagnosis of IPE in these patients. This demonstrates the importance of monitoring and improving the yield of scans analyzed by AI software after deployment.”  
    Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT
    Laurens Topff et al.
    Radiology: Cardiothoracic Imaging 2023; 5(2):e220163
  • “In conclusion, we demonstrated that commercially available AI software had high diagnostic accuracy in the detection of IPE on chest CT scans in patients with cancer and was effective in significantly reducing the time to diagnosis of positive examinations compared with the routine workflow in a setting with a backlog of unreported scans.”    
    Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT
    Laurens Topff et al.
    Radiology: Cardiothoracic Imaging 2023; 5(2):e220163

  • AI vs Sonographer for Cardiac Function
  • “Artificial intelligence (AI) has been developed for echocardiography, although it has not yet been tested with blinding and randomization. Here we designed a blinded, randomized non-inferiority clinical trial (ClinicalTrials.gov ID: NCT05140642; no outside funding) of AI versus sonographer initial assessment of left ventricular ejection fraction (LVEF) to evaluate the impact of AI in the interpretation workflow. The primary end point was the change in the LVEF between initial AI or sonographer assessment and final cardiologist assessment, evaluated by the proportion of studies with substantial change (more than 5% change). From 3,769 echocardiographic studies screened, 274 studies were excluded owing to poor image quality. The proportion of studies substantially changed was 16.8% in the AI group and 27.2% in the sonographer group (difference of −10.4%, 95% confidence interval: −13.2% to −7.7%, P < 0.001 for non-inferiority, P < 0.001 for superiority).”
    Blinded, randomized trial of sonographer versus AI cardiac function assessment
    Bryan He et al.
    Nature. 2023 Apr 5. doi: 10.1038/s41586-023-05947-3.
  • “The mean absolute difference between final cardiologist assessment and independent previous cardiologist assessment was 6.29% in the AI group and 7.23% in the sonographer group (difference of −0.96%, 95% confidence interval: −1.34% to −0.54%, P < 0.001 for superiority). The AI-guided workflow saved time for both sonographers and cardiologists, and cardiologists were not able to distinguish between the initial assessments by AI versus the sonographer (blinding index of 0.088). For patients undergoing echocardiographic quantification of cardiac function, initial assessment of LVEF by AI was non-inferior to assessment by sonographers.”  
    Blinded, randomized trial of sonographer versus AI cardiac function assessment
    Bryan He et al.
    Nature. 2023 Apr 5. doi: 10.1038/s41586-023-05947-3.
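The primary end point arithmetic reduces to a difference in proportions with a normal-approximation confidence interval. A sketch, with per-arm sizes assumed for illustration (the exact denominators are in the paper, not invented here; counts are chosen to match the reported 16.8% vs 27.2%):

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference p1 - p2 with a Wald 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

# Assumed arm sizes (~3,495 randomized studies split between arms).
n_ai, n_sono = 1740, 1755
d, ci = prop_diff_ci(round(0.168 * n_ai), n_ai, round(0.272 * n_sono), n_sono)
print(f"difference {d:+.1%}, 95% CI ({ci[0]:+.1%}, {ci[1]:+.1%})")
```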
  • “In summary, we found that an AI-guided workflow for the initial assessment of cardiac function in echocardiography was non-inferior and even superior to the initial assessment by the sonographer. Cardiologists required less time, substantially changed the initial assessment less frequently and were more consistent with previous clinical assessments by the cardiologist when using an AI-guided workflow. This finding was consistent across subgroups of different demographic and imaging characteristics. In the context of an ongoing need for precision phenotyping, our trial results suggest that AI tools can improve efficacy as well as efficiency in assessing cardiac function. Next steps include studying the effect of AI guidance on cardiac function assessment across multiple centres.”
    Blinded, randomized trial of sonographer versus AI cardiac function assessment
    Bryan He et al.
    Nature. 2023 Apr 5. doi: 10.1038/s41586-023-05947-3.
  • PURPOSE Low-dose computed tomography (LDCT) for lung cancer screening is effective, although most eligible people are not being screened. Tools that provide personalized future cancer risk assessment could focus approaches toward those most likely to benefit. We hypothesized that a deep learning model assessing the entire volumetric LDCT data could be built to predict individual risk without requiring additional demographic or clinical data.
    CONCLUSION Sybil can accurately predict an individual’s future lung cancer risk from a single LDCT scan to further enable personalized screening. Future study is required to understand Sybil’s clinical applications. Our model and annotations are publicly available.
    Sybil: A Validated Deep Learning Model to Predict Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography
    Peter G. Mikhael et al.
    J Clin Oncol 2023 (in press)
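The authors state that the model and annotations are publicly available. The sketch below shows only the general shape of single-scan volumetric risk inference, with a toy PyTorch stand-in; the class, layer sizes, and six yearly outputs are illustrative assumptions, not Sybil's actual architecture or API:

```python
import torch

class ToyLungRiskModel(torch.nn.Module):
    """Toy stand-in: maps one LDCT volume to yearly risk scores (years 1-6)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv3d(1, 8, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool3d(1),
            torch.nn.Flatten(),
            torch.nn.Linear(8, 6),
        )

    def forward(self, volume):
        return torch.sigmoid(self.net(volume))

model = ToyLungRiskModel().eval()
ldct = torch.randn(1, 1, 64, 128, 128)   # (batch, channel, z, y, x) CT volume
with torch.no_grad():
    yearly_risk = model(ldct)             # shape (1, 6): risk at years 1..6
print(yearly_risk)
```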
  • “In the article that accompanies this editorial, Mikhael et al report that an artificial intelligence and deep learning model, called Sybil, may predict an individual’s future lung cancer risk after one baseline computed tomography chest scan. This model is an important first step toward a precision approach to lung cancer screening, but understanding who would truly benefit from this technology will require significantly more investment in prospective studies targeting groups with differing risk profiles.”
    The Intersection of Lung Cancer Screening, Radiomics, and Artificial Intelligence: Can One Scan Really Predict the Future Development of Lung Cancer?
    Gerard A. Silvestri and James R. Jett
    American Society of Clinical Oncology 2023 (in press)
  • “The goal of oncology is to provide the longest possible survival outcomes with the therapeutics that are currently available without sacrificing patients’ quality of life. In lung cancer, several data points over a patient’s diagnostic and treatment course are relevant to optimizing outcomes in the form of precision medicine, and artificial intelligence (AI) provides the opportunity to use available data from molecular information to radiomics, in combination with patient and tumor characteristics, to help clinicians provide individualized care. In doing so, AI can help create models to identify cancer early in diagnosis and deliver tailored therapy on the basis of available information, both at the time of diagnosis and in real time as they are undergoing treatment. The purpose of this review is to summarize the current literature in AI specific to lung cancer and how it applies to the multidisciplinary team taking care of these complex patients.”
    Integration of artificial intelligence in lung cancer: Rise of the machine
    Colton Ladbury et al.
    Cell Reports Medicine (2023), https://doi.org/10.1016/j.xcrm.2023.100933
  • “The use of AI to augment imaging technology has found success in several disciplines, including computer-aided detection and diagnosis (CAD), convolutional neural networks (CNNs), and radiomics. CAD systems are typically standalone with a unified goal of detection or diagnosis of disease. At its core, it is simply trying to aid practitioners with identification of disease, with primary focus on that binary outcome. The field of radiomics seeks to use medical imaging to generate high-dimensional quantitative data, which can in turn be used for analysis that seeks to better understand the underlying characteristics of disease. Radiomics is inherently meant to support the overall diagnosis and management of patients at any point in the imaging workflow and can be combined with other patient characteristics to produce powerful support tools, and therefore can be considered a natural extension of CAD.”
    Integration of artificial intelligence in lung cancer: Rise of the machine
    Colton Ladbury et al.
    Cell Reports Medicine (2023), https://doi.org/10.1016/j.xcrm.2023.100933
  • In addition to assisting with identifying lung cancers, AI can also help predict oncologic outcomes overall and who will respond to therapy. Predicting outcomes including locoregional and distant recurrence, progression-free survival, and overall survival (OS) can be challenging, given that factors that influence these outcomes are multivariate. Imaging features are no doubt highly relevant, but these must be combined with patient and tumor characteristics.
    Integration of artificial intelligence in lung cancer: Rise of the machine
    Colton Ladbury et al.
    Cell Reports Medicine (2023), https://doi.org/10.1016/j.xcrm.2023.100933
  • “AI has also been applied to treatment decision making. A clinical decision support system (CDSS) is a tool to assist physicians in making clinical decisions on the basis of analyses of multiple data points on a particular patient. Watson for Oncology (WFO) is one example of a CDSS that has been applied to the treatment management of lung cancer. A study comparing decisions made by WFO to a multidisciplinary team found relatively high concordance in recommendations for early stage and metastatic disease (92.4%–100%) but lower rates of concordance in stage II or III (80.8%–84.6%). Therefore, although there is room for improvement for decision support, these tools will be critical for standardizing lung cancer treatment across available treatment options and disciplines, thereby enhancing outcomes.”
    Integration of artificial intelligence in lung cancer: Rise of the machine
    Colton Ladbury et al.
    Cell Reports Medicine (2023), https://doi.org/10.1016/j.xcrm.2023.100933
  • “Surgical resection is standard of care for management of localized lung cancer. Extent of surgery depends on several factors, including disease progression and patient eligibility. When possible, lobectomy has been established as standard, with improved disease control and/or survival compared with smaller wedge resections and larger pneumonectomies. Furthermore, the mortality rate of lobectomies is 2.3% compared with 6.8% with pneumonectomies. However, not every patient will be a candidate for lobectomy, because of factors such as medical history, smoking history, and lung function. AI offers an opportunity to better risk-stratify patients to come up with an optimal treatment plan, which might also include no surgery at all if risk is too high.”
    Integration of artificial intelligence in lung cancer: Rise of the machine
    Colton Ladbury et al.
    Cell Reports Medicine (2023), https://doi.org/10.1016/j.xcrm.2023.100933
  • “Although AI is clearly an invaluable tool to the multidisciplinary lung cancer care team, several barriers remain to its widespread implementation and availability. First, AI relies heavily on data, and data acquisition and organization continue to be a challenge that AI will need to overcome. Efforts will optimally focus on ways of efficiently extracting EMR data to create large databases for AI research. Sample size is important in AI research, as it must be sufficiently large to train, test and validate models. Presently, most outcomes-based research studies include relatively small numbers of patients (between tens and hundreds of patients) that are somewhat heterogeneous as far as patient demographics, genomics, and imaging features are concerned. Though it is sometimes possible to perform AI analyses on datasets of that size, sample sizes in the thousands might be required for many applications. Otherwise, models may be inaccurate, poorly generalizable, and not applicable or reproducible to clinical outcomes. Additionally, although the EMR system has provided the opportunity to extract data into models for AI-based research, a number of variables are recorded as free text, which cannot directly be extracted for data analysis.”
    Integration of artificial intelligence in lung cancer: Rise of the machine
    Colton Ladbury et al.
    Cell Reports Medicine (2023), https://doi.org/10.1016/j.xcrm.2023.100933
  • “The present is an exciting time for lung cancer treatment, as the available treatment options, and the precision with which we can select them, have improved dramatically in recent years. However, these increasingly tailored treatment options are accompanied by a need for data to inform clinical decisions, and therefore a need to be able to make sense of large volumes of data throughout a hypothetical patient’s treatment course. The overarching field of AI, inclusive of ML, NNs, DL, NLP, XAI, and other domains and methodologies, offers a promising avenue for improving all aspects of lung cancer management with data-driven approaches. Advances in radiomics allow us to derive additional value from existing diagnostic imaging, while ML algorithms help with optimizing treatment selection. Although there are limitations to AI and challenges as discussed, with large databases and suitable platforms AI research will continue to grow and become more reproducible, accurate, and applicable. With the rise in AI-based research over the past decade and increasing interest toward AI in the oncology community, including young trainees, AI-based interventions in lung cancer management will play a key role in the future.”
    Integration of artificial intelligence in lung cancer: Rise of the machine
    Colton Ladbury et al.
    Cell Reports Medicine (2023), https://doi.org/10.1016/j.xcrm.2023.100933 
  • “Clinicians often encounter discrepant measurements of the ascending aorta that impede, complicate, and impair appropriate clinical assessment—including key issues of presence or absence of aortic growth, rate of growth, and need for surgical intervention. These discrepancies may arise within a single modality (computed tomography scan, magnetic resonance imaging, or echocardiography) or between modalities. The authors explore the origins and significance of these discrepancies, revealing that some “truth” usually underlies all the discrepant measurements, which individually look at the ascending aorta with different perspectives and dimensional definitions.”
    Discrepancies in Measurement of the Thoracic Aorta
    John A. Elefteriades et al.
    JACC Vol. 76, No. 2, 2020: 201–17
  • “Aortic measurements can vary substantially between image sets done with or without gating, reflecting the variation of aortic size in the different phases of the aortic cycle. Often (probably, most commonly), ascending aortic aneurysms are identified incidentally on scans done for other reasons. Such scans will often have been done nongated. It would be helpful to have the notification “nongated” routinely included in the official radiographic report.”
    Discrepancies in Measurement of the Thoracic Aorta
    John A. Elefteriades et al.
    JACC Vol. 76, No. 2, 2020: 201–17
  • “USE OF CORONAL IMAGE FOR AORTIC ROOT. The maximal deep sinus to deep sinus dimension can be approximated on the coronal (or sagittal) images by simple hand techniques. One takes the largest diameter that will fit in the aortic root zone on the coronal films. We feel that this dimension has clinical meaning. Furthermore, reading the “fattest” diameter on the coronal images obviates the issue in centerline techniques of identifying the proper caudal to cranial plane along the centerline for measurements to be taken. The maximal diameter is vividly apparent on simple coronal images. Furthermore, as we will see in the next section, the deep sinus to deep sinus dimension resembles very closely the methodology of measurement of aortic root dimension by echocardiography. Echo technicians are taught to orient the echo beam precisely to capture the maximum transverse dimension.”
    Discrepancies in Measurement of the Thoracic Aorta
    John A. Elefteriades et al.
    JACC Vol. 76, No. 2, 2020: 201–17
  • “ASCENDING AORTA PROPER. Let us consider first the ascending aorta proper (above the sinotubular junction). For most aortas, which have undergone little lengthening and, thus, little curvature, measurements by simple diameter on axial images will differ very little from double-oblique computerized assessments. When the ascending aorta has elongated considerably, and, thus, become curved, obliquity of the axial plane in respect to the centerline of the aorta will introduce a moderate discrepancy of several millimeters; the axial measurements will “overestimate” the diameter relative to the double oblique measurements. For the practitioner, a simple supplemental diameter measurement on the coronal images will correct for this obliquity.”
    Discrepancies in Measurement of the Thoracic Aorta
    John A. Elefteriades et al.
    JACC Vol. 76, No. 2, 2020: 201–17
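The "moderate discrepancy of several millimeters" from obliquity has a simple geometric form: a vessel of true diameter \(d\) whose centerline is tilted by angle \(\theta\) from the normal of the axial plane is cut in an ellipse whose major axis is \(d/\cos\theta\). For example:

\[ d_{\text{axial}} = \frac{d_{\text{true}}}{\cos\theta}, \qquad \text{e.g.}\ \frac{45\ \text{mm}}{\cos 25^\circ} \approx 49.7\ \text{mm}, \]

an overestimate of nearly 5 mm, consistent with the discrepancy the authors describe.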
  • “In a large sociodemographically diverse cohort of patients with TAA, absolute risk of aortic dissection was low but increased with larger aortic sizes after adjustment for potential confounders and competing risks. Our data support current consensus guidelines recommending prophylactic surgery in nonsyndromic individuals with TAA at a 5.5-cm threshold.”
    Association of Thoracic Aortic Aneurysm Size With Long-term Patient Outcomes: The KP-TAA Study
    Matthew D. Solomon et al.
    JAMA Cardiol. 2022;7(11):1160-1169
  • Key Points
    Question: What is the risk of aortic dissection (AD) and all-cause death for nonsyndromic patients with unrepaired ascending thoracic aortic aneurysm (TAA), overall and by TAA size?
    Findings: In this cohort study, the overall absolute risk of AD was low. Although the risk of AD and all-cause death was associated with larger aortic sizes, there was an inflection point at 6.0 cm.
    Meaning: The findings in this study support consensus guidelines recommending surgical intervention at 5.5 cm in nonsyndromic patients with TAA; earlier prophylactic surgery should be done only selectively in the nonsyndromic population, given the nontrivial risks associated with aortic surgery.
    Association of Thoracic Aortic Aneurysm Size With Long-term Patient Outcomes: The KP-TAA Study
    Matthew D. Solomon et al.
    JAMA Cardiol. 2022;7(11):1160-1169
  • “We identified a large sociodemographically diverse cohort of more than 6300 nonsyndromic adults with TAA, which included substantial follow-up with patients with all TAA sizes, including large aortic sizes that have been previously understudied. Larger aortic size was associated with higher risk of AD and all-cause death after adjustment for potential confounders and competing risks, but absolute risk of AD was low in the overall cohort with an inflection point at 6.0 cm, supporting current guidelines recommending surgery at 5.5 cm. Earlier prophylactic surgery should be considered selectively in nonsyndromic patients with TAA, given the nontrivial risks associated with aortic surgery.”
    Association of Thoracic Aortic Aneurysm Size With Long-term Patient Outcomes: The KP-TAA Study
    Matthew D. Solomon et al.
    JAMA Cardiol. 2022;7(11):1160-1169
  • “To date, there are several commercial AI programs for lung nodule detection (on chest X-ray and on chest CT) and segmentation (on chest CT). A comprehensive overview of currently available AI software can be consulted on the website AIforRadiology.com, which collects software characteristics and associated publications, with the aim of increasing transparency in this domain. On the other hand, commercial software for classification is still lacking, for the reasons developed above. While there is no doubt that DL has great abilities in the field of medical imaging, further work is still needed at distinct levels, regarding the application of standardization guidelines for building and reporting DL-based studies, the availability of larger, high-quality datasets, the evaluation in terms of clinical outcome of AI tools that are already commercially available, and the adaptation of legal and ethical frameworks with the issue of liability. Most of all, radiologists, who may have the feeling of receiving insufficient information about AI, must take a central role in DL tools development and implementation, by identifying the optimal workflows for their use. With the support of DL, radiologists could thus use the time gain for higher added-value tasks, such as integration of the radiological findings in the overall clinical management, going beyond the mere image analysis.”
    Artificial intelligence: A critical review of applications for lung nodule and lung cancer
    Constance de Margerie-Mellon, Guillaume Chassagnon et al.
    Diagnostic and Interventional Imaging 000 (2022) 1−7
  • BACKGROUND. CT-based body composition (BC) measurements have historically been too resource intensive to analyze for widespread use and have lacked robust comparison with traditional weight metrics for predicting cardiovascular risk.
    OBJECTIVE. The aim of this study was to determine whether BC measurements obtained from routine CT scans by use of a fully automated deep learning algorithm could predict subsequent cardiovascular events independently from weight, BMI, and additional cardiovascular risk factors.
    CONCLUSION. VFA derived from fully automated and normalized analysis of abdominal CT examinations predicts subsequent MI or stroke in Black and White patients, independent of traditional weight metrics, and should be considered an adjunct to BMI in risk models.
    CLINICAL IMPACT. Fully automated and normalized BC analysis of abdominal CT has promise to augment traditional cardiovascular risk prediction models.
    Utility of Normalized Body Composition Areas, Derived From Outpatient Abdominal CT Using a Fully Automated Deep Learning Method, for Predicting Subsequent Cardiovascular Events
    Kirti Magudia et al.
    AJR 2023; 220:1–9
    METHODS. This retrospective study included 9752 outpatients (5519 women and 4233 men; mean age, 53.2 years; 890 patients self-reported their race as Black and 8862 as White) who underwent routine abdominal CT at a single health system from January 2012 through December 2012 and who were given no major cardiovascular or oncologic diagnosis within 3 months of undergoing CT. Fully automated deep learning BC analysis was performed at the L3 vertebral body level to determine three BC areas (skeletal muscle area [SMA], visceral fat area [VFA], and subcutaneous fat area [SFA]). Age-, sex-, and race-normalized reference curves were used to generate z scores for the three BC areas. Subsequent myocardial infarction (MI) or stroke was determined from the electronic medical record. Multivariable-adjusted Cox proportional hazards models were used to determine hazard ratios (HRs) for MI or stroke within 5 years after CT for the three BC area z scores, with adjustment for normalized weight, normalized BMI, and additional cardiovascular risk factors (smoking status, diabetes diagnosis, and systolic blood pressure).
    Utility of Normalized Body Composition Areas, Derived From Outpatient Abdominal CT Using a Fully Automated Deep Learning Method, for Predicting Subsequent Cardiovascular Events
    Kirti Magudia et al.
    AJR 2023; 220:1–9
  • Key Finding
    - After normalization for age, sex, and race, VFA from routine CT was associated with risk of MI and stroke (HR, 1.31 [95% CI, 1.03–1.67] and 1.46 [95% CI, 1.07–2.00]; both p = .04 for overall effect) in multivariable models in Black and White patients; weight, BMI, SMA, and SFA were not.
    Importance
    - VFA from automated CT analysis predicts MI and stroke, independent of traditional weight metrics, and may serve as an adjunct to BMI in risk models.
    Utility of Normalized Body Composition Areas, Derived From Outpatient Abdominal CT Using a Fully Automated Deep Learning Method, for Predicting Subsequent Cardiovascular Events
    Kirti Magudia et al.
    AJR 2023; 220:1–9
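The normalization step described in the methods is an ordinary z score computed against age-, sex-, and race-specific reference curves. A minimal sketch; the reference mean and SD below are invented for illustration:

```python
def bc_z_score(value, ref_mean, ref_sd):
    """z score of a body-composition area against a demographic reference curve."""
    return (value - ref_mean) / ref_sd

# Hypothetical reference values for a 53-year-old woman (illustration only).
vfa_cm2 = 160.0
z = bc_z_score(vfa_cm2, ref_mean=120.0, ref_sd=45.0)
print(f"VFA z score: {z:+.2f}")   # +0.89 -> above the reference mean
```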
  • Objective: To assess the diagnostic performance of an AI algorithm for detection of incidental pulmonary embolism (iPE) on conventional contrast-enhanced chest CT examinations.
    Detection of Incidental Pulmonary Embolism on Conventional Contrast-Enhanced Chest CT: Comparison of an Artificial Intelligence Algorithm and Clinical Reports
    Kiran Batra et al.
    AJR 2022 Jul 13 [published online]. doi:10.2214/AJR.22.2789
  • Results: Based on the adjudication process, the frequency of iPE was 1.3% (40/3003). AI detected 4 iPEs missed by clinical reports, and clinical reports detected 7 iPEs missed by AI. AI, compared with clinical reports, exhibited significantly lower specificity (92.7% vs 99.8%, p=.045) and PPV (86.8% vs 97.3%, p=.03), but no significant difference in sensitivity (82.5% vs 90.0%, p=.37) or NPV (99.8% vs 99.9%, p=.36). For AI, neither sensitivity nor specificity varied significantly in association with age, sex, examination location, or cancer-related clinical scenario (all p>.05). Explanations of false positives by AI included metastatic lymph nodes and pulmonary venous filling defect, and of false negatives by AI included surgically altered anatomy and small-caliber subsegmental vessel.
    Conclusion: AI had high NPV and moderate PPV for iPE detection, detecting some iPEs missed by radiologists.
    Clinical Impact: Potential applications of the AI tool include serving as a second reader to help detect additional iPEs or as a worklist triage tool to allow earlier iPE detection and intervention. Various explanations of AI misclassifications may provide targets for model improvement.  
    Detection of Incidental Pulmonary Embolism on Conventional Contrast-Enhanced Chest CT: Comparison of an Artificial Intelligence Algorithm and Clinical Reports
    Kiran Batra et al.
    AJR 2022 Jul 13 [published online]. doi:10.2214/AJR.22.2789
  • KEY FINDING: A commercial AI tool had NPV of 99.8% and PPV of 86.7% for detection of iPE on conventional contrast-enhanced chest CT examinations. Of 40 iPEs present in the study sample, 7 were detected only by the clinical reports, and 4 were detected only by AI.  IMPORTANCE:  The AI tool has potential to serve as a second reader or as a worklist triage tool, to facilitate reliable and rapid iPE detection.
    Detection of Incidental Pulmonary Embolism on Conventional Contrast-Enhanced Chest CT: Comparison of an Artificial Intelligence Algorithm and Clinical Reports
    Kiran Batra et al.
    AJR 2022 Jul 13 [published online]. doi:10.2214/AJR.22.2789
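The head-to-head comparison above hinges on the discordant detections (7 iPEs found only by the clinical reports, 4 only by AI). An exact McNemar test, which is a binomial test on those discordant pairs, checks whether either method detects significantly more; a quick check assuming the quoted counts:

```python
from scipy.stats import binomtest

only_ai, only_reports = 4, 7
result = binomtest(only_ai, only_ai + only_reports, p=0.5)
print(f"two-sided exact McNemar p = {result.pvalue:.2f}")   # ~0.55: no clear winner
```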
  • “In conclusion, a commercial AI tool had 99.8% NPV and 86.8% PPV for detection of iPE on conventional contrast-enhanced chest CT examinations (i.e., non-CTPA protocols). Both the AI tool and clinical reports detected iPEs missed by the other method. The diagnostic performance of the AI tool did not show significant variation across study subgroups. Various explanations of misclassifications by the AI tool (both false positives and false negatives) were identified, to provide targets for model improvement. Potential clinical applications of the AI tool include serving as a second reader to help detect additional iPEs missed by radiologists or as a worklist triage and prioritization tool to allow earlier iPE detection and intervention.”  
    Detection of Incidental Pulmonary Embolism on Conventional Contrast-Enhanced Chest CT: Comparison of an Artificial Intelligence Algorithm and Clinical Reports
    Kiran Batra et al.
    AJR 2022 Jul 13 [published online]. doi:10.2214/AJR.22.2789
  • “Our objective was to develop deep learning models with chest radiograph data to predict healthcare costs and classify top-50% spenders. 21,872 frontal chest radiographs were retrospectively collected from 19,524 patients with at least 1-year spending data. Among the patients, 11,003 patients had 3 years of cost data, and 1678 patients had 5 years of cost data. Model performances were measured with area under the receiver operating characteristic curve (ROC-AUC) for classification of top-50% spenders and Spearman ρ for prediction of healthcare cost. The best model predicting 1-year (N = 21,872) expenditure achieved ROC-AUC of 0.806 [95% CI 0.793–0.819] for top-50% spender classification and ρ of 0.561 [0.536–0.586] for regression. Similarly, for predicting 3-year (N = 12,395) expenditure, ROC-AUC of 0.771 [0.750–0.794] and ρ of 0.524 [0.489–0.559]; for predicting 5-year (N = 1779) expenditure ROC-AUC of 0.729 [0.667–0.729] and ρ of 0.424 [0.324–0.529]. Our deep learning model demonstrated the feasibility of predicting health care expenditure as well as classifying top 50% healthcare spenders at 1, 3, and 5 year(s), implying the feasibility of combining deep learning with information-rich imaging data to uncover hidden associations that may elude physicians. Such a model can be a starting point of making an accurate budget in reimbursement models in healthcare industries.”
    Prediction of future health care expenses of patients from chest radiographs using deep learning: a pilot study
    Jae Ho Sohn et al.
    Sci Rep (2022) 12:8344
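The two performance metrics the study reports, ROC-AUC for top-50% spender classification and Spearman ρ for cost regression, can be computed as below; the toy data merely stand in for the model's outputs:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
true_cost = rng.lognormal(mean=8, sigma=1, size=500)                # yearly spend
pred_cost = true_cost * rng.lognormal(mean=0, sigma=0.5, size=500)  # noisy model output

top50 = (true_cost >= np.median(true_cost)).astype(int)             # spender label
rho, _ = spearmanr(true_cost, pred_cost)
print(f"ROC-AUC: {roc_auc_score(top50, pred_cost):.3f}")
print(f"Spearman rho: {rho:.3f}")
```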
  • “We demonstrated the feasibility of predicting healthcare costs and classifying top-50% spenders by using deep learning models based on chest radiographs (CXR) that are widely available in clinics and hospitals. The models were developed to identify patients who are likely to incur high healthcare expenditure and predict their subsequent amount of healthcare spending within 1, 3, and 5 years. Unlike physicians who are trained to identify only a handful of imaging biomarkers known to medical literature, our deep learning algorithm is able to take into account thousands of imaging features of weak to moderate correlations with healthcare spending as presented in the training set. When a CXR is evaluated by the deep learning algorithm, its pixels are aggregated, transformed, and passed through many layers of filters with each layer extracting different lines, angles, patterns, and associations. As those extracted features are then passed upstream to higher-level filters, they are compared to the thousands of CXR that the algorithm was trained on. All these numbers finally converge to the estimated cost. Considering that CXR tends to be standardized, deep learning algorithms are trained to be extremely sensitive to details that clinical radiologists may not typically recognize.”
    Prediction of future healthcare expenses of patients from chest radiographs using deep learning: a pilot study
    Jae Ho Sohn et al.
    Sci Rep (2022) 12:8344
  • Conclusion
    “We demonstrated the potential of deep learning algorithms to predict 1-, 3-, and 5-year patient healthcare expenditure based on a frontal chest radiograph even in the absence of additional clinical information. This study confirms that radiological imaging indeed contains rich information that may not be routinely extracted by human radiologists but can be analyzed by the power of big data and deep learning. Successfully predicting healthcare expenditure can potentially be an important first step towards improving health policy and medical interventions to address patient care and societal costs.”
    Prediction of future healthcare expenses of patients from chest radiographs using deep learning: a pilot study
    Jae Ho Sohn et al.
    Sci Rep (2022) 12:8344
  • AI: The Problems and the Challenges
    - Reproducibility of data results
    - Studies designed to solve a limited problem
    - Limited dataset sizes drawn from select populations
    - Unintentional errors in calculations
  • Background: Visual assessment remains the standard for evaluating emphysema at CT; however, it is time consuming, is subjective, requires training, and is affected by variability that may limit sensitivity to longitudinal change.
    Purpose: To evaluate the clinical and imaging significance of increasing emphysema severity as graded by a deep learning algorithm on sequential CT scans in cigarette smokers.
    Materials and Methods: A secondary analysis of the prospective Genetic Epidemiology of Chronic Obstructive Pulmonary Disease (COPDGene) study participants was performed and included baseline and 5-year follow-up CT scans from 2007 to 2017. Emphysema was classified automatically according to the Fleischner emphysema grading system at baseline and 5-year follow-up using a deep learning model. Baseline and change in clinical and imaging parameters at 5-year follow-up were compared in participants whose emphysema progressed versus those who did not. Kaplan-Meier analysis and multivariable Cox regression were used to assess the relationship between emphysema score progression and mortality.
    Results: A total of 5056 participants (mean age, 60 years ± 9 [SD]; 2566 men) were evaluated. At 5-year follow-up, 1293 of the 5056 participants (26%) had emphysema progression according to the Fleischner grading system. This group demonstrated progressive airflow obstruction (forced expiratory volume in 1 second [percent predicted]: –3.4 vs –1.8), a greater decline in 6-minute walk distance (–177 m vs –124 m), and greater progression in quantitative emphysema extent (adjusted lung density: –1.4 g/L vs 0.5 g/L; percentage of lung voxels with CT attenuation less than −950 HU: 0.6 vs 0.2) than those with nonprogressive emphysema (P < .001 for each). Multivariable Cox regression analysis showed a higher mortality rate in the group with emphysema progression, with an estimated hazard ratio of 1.5 (95% CI: 1.2, 1.8; P < .001).
    Conclusion: An increase in Fleischner emphysema grade on sequential CT scans using an automated deep learning algorithm was associated with increased functional impairment and increased risk of mortality.
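One of the quantitative indices above, the percentage of lung voxels below −950 HU (%LAA-950), is straightforward to compute given a CT volume in Hounsfield units and a lung mask; a minimal sketch with a toy volume:

```python
import numpy as np

def laa950_percent(ct_hu, lung_mask):
    """Percentage of lung voxels with attenuation below -950 HU."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.mean(lung_voxels < -950)

# Toy volume: mostly normal lung (~-850 HU) with one low-attenuation pocket.
ct = np.full((64, 64, 64), -850.0)
ct[:16, :16, :16] = -970.0
mask = np.ones(ct.shape, dtype=bool)
print(f"%LAA-950 = {laa950_percent(ct, mask):.1f}%")   # 1/64 of voxels -> 1.6%
```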
  • Summary
    Emphysema progression on CT scans scored using a deep learning algorithm was associated with increased functional impairment and mortality at 5-year follow-up.  
    Key Results
    • A deep learning algorithm was used to classify emphysema at baseline and 5-year follow-up in 5056 participants.
    • Of the 5056 participants, 1293 (26%) had an increase in emphysema grade at 5 years; these participants had progressive airflow obstruction, greater decline in 6-minute walk distance, and greater progression in emphysema extent than those with nonprogressive emphysema (P < .001 for each).
    • Emphysema progression was associated with an increased mortality (hazard ratio: 1.5, P < .001).
    Emphysema Progression at CT by Deep Learning Predicts Functional Impairment and Mortality: Results from the COPDGene Study
    Andrea S. Oh et al.  
    Radiology 2022; 000:1–8 (in press)
  • “In conclusion, we applied a previously validated deep learning algorithm that automatically classifies emphysema pattern at CT according to the Fleischner classification system and demonstrated that an increase in emphysema severity score at 5 years was an independent predictor of disease progression and mortality. These results suggest the clinical value of automatic, structured grading of emphysema severity at CT for identification of patients at greater risk. Possible applications include lung health assessments at lung cancer screening or entry criteria for clinical trials.”
    Emphysema Progression at CT by Deep Learning Predicts Functional Impairment and Mortality: Results from the COPDGene Study
    Andrea S. Oh et al.  
    Radiology 2022; 000:1–8 (in press)
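A sketch of the kind of survival model behind the reported hazard ratio, using lifelines' CoxPHFitter on synthetic data constructed so the true hazard ratio is 1.5. This stands in for, and is not, the COPDGene analysis:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
progressed = rng.integers(0, 2, n)
# Exponential survival times: hazard is 1.5x higher if emphysema progressed.
time = rng.exponential(10.0 / (1.0 + 0.5 * progressed))
observed = rng.random(n) < 0.3          # non-informative censoring indicator

df = pd.DataFrame({"T": time, "E": observed, "progressed": progressed})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.hazard_ratios_)               # 'progressed' HR ~ 1.5 by construction
```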
  • "KD is associated with mucocutaneous lymph node syndrome and predominantly affects medium and small arteries in infants and children less than 5 years of age. It is more prevalent in Asian populations and has a male dominance.”
    Radiologic Imaging in Large and Medium Vessel Vasculitis
    Weinrich JM et al.
    Radiol Clin N Am 58 (2020) 765–779
  • “The coronary arteries are often involved in KD and coronary artery aneurysms develop as a result of coronary vasculitis in about 15% to 25% of untreated patients. Coronary artery aneurysms can be classified according to their size (small, <5 mm; medium, 5–8 mm; and large, >8 mm) and shape (saccular or fusiform). Large coronary artery aneurysms are associated with a higher risk of complications such as rupture, thrombosis, and stenosis, which possibly lead to myocardial infarction and death."
    Radiologic Imaging in Large and Medium Vessel Vasculitis
    Weinrich JM et al.
    Radiol Clin N Am 58 (2020) 765–779
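The size classification quoted above reduces to a small helper:

```python
def classify_coronary_aneurysm(diameter_mm):
    """Kawasaki disease coronary aneurysm size: <5 mm small, 5-8 mm medium, >8 mm large."""
    if diameter_mm < 5:
        return "small"
    if diameter_mm <= 8:
        return "medium"
    return "large"

for d in (3.2, 6.0, 9.5):
    print(f"{d} mm -> {classify_coronary_aneurysm(d)}")
```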
  • “The U.S. health care industry is structured on the historically necessary model of in-person interactions between patients and their clinicians. Clinical workflows and economic incentives have largely been developed to support and reinforce a face-to-face model of care, resulting in the congregation of patients in emergency departments and waiting areas during this crisis. This care structure contributes to the spread of the virus to uninfected patients who are seeking evaluation.”
    Covid-19 and Health Care’s Digital Revolution
    Sirina Keesara, Andrea Jonas, Kevin Schulman
    N Engl J Med. 2020 Apr 2. doi: 10.1056/NEJMp2005835.
  • Purpose: This study sought to establish a robust and fully automated Type B aortic dissection (TBAD) segmentation method by leveraging the emerging deep learning techniques.
    Conclusion: A deep learning-based model provides a promising approach for accurate and efficient segmentation of TBAD and makes automated measurement of TBAD anatomical features possible.
    Fully automatic segmentation of type B aortic dissection from CTA images enabled by deep learning
    Long Cao et al.
    European Journal of Radiology 121 (2019) 108713

  • Using the proposed deep learning-based models and workflow, we successfully demonstrated the feasibility of automatic TBAD segmentation. First, based on high-quality ground truth annotated by experts, we revealed that a CNN network with a multi-task output is capable of segmenting the TBAD into the whole aorta, TL, and FL simultaneously. CNN3 achieved the best DSCs (0.93 ± 0.01, 0.93 ± 0.01, and 0.91 ± 0.02 for the whole aorta, TL, and FL, respectively) and calculated aortic lumen volumes that closely correspond to those of manual segmentation, with only 0.038 ± 0.006 s per slice. These results indicate a significant step forward towards the automated measurement of TBAD.
    Fully automatic segmentation of type B aortic dissection from CTA images enabled by deep learning
    Long Cao et al.
    European Journal of Radiology 121 (2019) 108713
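The DSC values quoted are Dice similarity coefficients between predicted and manual masks; a minimal NumPy version for binary masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

a = np.zeros((32, 32), dtype=bool); a[8:24, 8:24] = True    # predicted lumen
b = np.zeros((32, 32), dtype=bool); b[10:26, 10:26] = True  # manual ground truth
print(f"DSC = {dice(a, b):.2f}")   # two offset 16x16 squares -> 0.77
```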
  • Background: Deep learning has the potential to augment the use of chest radiography in clinical radiology, but challenges include poor generalizability, spectrum bias, and difficulty comparing across studies.
    Purpose: To develop and evaluate deep learning models for chest radiograph interpretation by using radiologist-adjudicated reference standards.
    Conclusion: Expert-level models for detecting clinically relevant chest radiograph findings were developed for this study by using adjudicated reference standards and with population-level performance estimation. Radiologist-adjudicated labels for 2412 ChestX-ray14 validation set images and 1962 test set images are provided.
    Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation
    Anna Majkowska et al.
    Radiology 2019; 00:1–11 • https://doi.org/10.1148/radiol.2019191293
    Materials and Methods: Deep learning models were developed to detect four findings (pneumothorax, opacity, nodule or mass, and fracture) on frontal chest radiographs. This retrospective study used two data sets. Data set 1 (DS1) consisted of 759 611 images from a multicity hospital network; ChestX-ray14 is a publicly available data set with 112 120 images. Natural language processing and expert review of a subset of images provided labels for 657 954 training images. Test sets consisted of 1818 and 1962 images from DS1 and ChestX-ray14, respectively. Reference standards were defined by radiologist-adjudicated image review. Performance was evaluated by area under the receiver operating characteristic curve analysis, sensitivity, specificity, and positive predictive value. Four radiologists reviewed test set images for performance comparison. Inverse probability weighting was applied to DS1 to account for positive radiograph enrichment and estimate population-level performance.
    Results: In DS1, population-adjusted areas under the receiver operating characteristic curve for pneumothorax, nodule or mass, airspace opacity, and fracture were, respectively, 0.95 (95% confidence interval [CI]: 0.91, 0.99), 0.72 (95% CI: 0.66, 0.77), 0.91 (95% CI: 0.88, 0.93), and 0.86 (95% CI: 0.79, 0.92). With ChestX-ray14, areas under the receiver operating characteristic curve were 0.94 (95% CI: 0.93, 0.96), 0.91 (95% CI: 0.89, 0.93), 0.94 (95% CI: 0.93, 0.95), and 0.81 (95% CI: 0.75, 0.86), respectively.
    Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation
    Anna Majkowska et al.
    Radiology 2019; 00:1–11 • https://doi.org/10.1148/radiol.2019191293
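The inverse probability weighting mentioned in the methods corrects population-level estimates for the positive-enriched test set: each case is weighted by the inverse of its sampling probability. Positive predictive value is the metric most affected; a toy illustration with invented sampling fractions:

```python
import numpy as np

rng = np.random.default_rng(2)
y_true = np.r_[np.ones(200), np.zeros(200)]   # enriched test set: 50% positive
y_pred = np.r_[rng.random(200) < 0.9,          # sensitivity ~0.90
               rng.random(200) < 0.05]         # 1 - specificity ~0.05

# Suppose positives were oversampled 20x relative to the source population,
# so each negative represents 20 population cases (illustrative only).
w = np.where(y_true == 1, 1.0, 20.0)

def ppv(y, yhat, weights):
    return (weights * y * yhat).sum() / (weights * yhat).sum()

print(f"PPV on enriched set : {ppv(y_true, y_pred, np.ones_like(w)):.2f}")
print(f"PPV after weighting : {ppv(y_true, y_pred, w):.2f}")   # much lower
```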
  • “Deep learning models achieved parity to chest radiography interpretations from board-certified radiologists for the detection of pneumothorax, nodule or mass, airspace opacity, and fracture on a diverse multicenter chest radiography data set (areas under the receiver operating characteristic curve, 0.95, 0.72, 0.91, and 0.86, respectively)."
    Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation
    Anna Majkowska et al.
    Radiology 2019; 00:1–11 • https://doi.org/10.1148/radiol.2019191293

  • “In conclusion, we developed and evaluated clinically relevant artificial intelligence models for chest radiograph interpretation that performed similar to radiologists by using a diverse set of images. The population-adjusted performance analyses reported here along with the release of adjudicated labels for the publicly available ChestX-ray14 images can provide a useful resource to facilitate the continued development of clinically useful artificial intelligence models for chest radiographs.”
    Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation
    Anna Majkowska et al.
    Radiology 2019; 00:1–11 • https://doi.org/10.1148/radiol.2019191293 
  • Pneumothorax Detection
  • GE Critical Care Suite
    - Helps radiologists prioritize critical cases with a suspected pneumothorax – a type of collapsed lung – by immediately flagging critical cases to radiologists for triage, which could drastically cut the average review time, currently up to eight hours
    - Critical Care Suite’s overall Area Under the Curve (AUC) for detecting a pneumothorax is 0.96. Large PTXs are detected with extremely high accuracy (AUC = 0.99). Small PTXs are detected with high accuracy (AUC = 0.94). GE Healthcare 510k K183182.
  • GE Critical Care Suite
    A prioritized “STAT” X-ray can sit waiting for up to eight hours for a radiologist’s review. However, when a patient is scanned on a device with Critical Care Suite, the system automatically analyzes the images, searching for a pneumothorax. If a pneumothorax is suspected, an alert – along with the original chest X-ray – is sent directly to the radiologist for review via picture archiving and communication systems (PACS). The technologist also receives a subsequent on-device notification to give awareness of the prioritized cases. Quality-focused AI algorithms simultaneously analyze and flag protocol and field-of-view errors and auto-rotate the images on-device.
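As an illustration of the triage pattern this describes, here is a small, hypothetical sketch of an AI-prioritized reading worklist; the exam names and the two-level priority scheme are invented, and a real PACS integration would be far more involved.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical AI-assisted triage queue: exams flagged by an on-device
# pneumothorax algorithm jump ahead of routine studies so they reach
# the radiologist first, rather than waiting in arrival order.

@dataclass(order=True)
class Exam:
    priority: int            # 0 = AI-flagged critical, 1 = routine
    arrival_index: int       # ties broken by arrival order
    accession: str = field(compare=False)

queue = []
for i, (acc, flagged) in enumerate([("RX001", False), ("RX002", True), ("RX003", False)]):
    heapq.heappush(queue, Exam(0 if flagged else 1, i, acc))

while queue:
    exam = heapq.heappop(queue)
    print(exam.accession, "critical" if exam.priority == 0 else "routine")
# RX002 (flagged) is read first despite arriving second.
```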
  • “Rapid technological advancements in artificial intelligence (AI) methods have fueled explosive growth in decision tools being marketed by a rapidly growing number of companies. AI developments are being driven largely by computer scientists, informaticians, engineers, and businesspeople, with much less direct participation by radiologists. Participation by radiologists in AI is largely restricted to educational efforts to familiarize them with the tools and promising results, but techniques to help them decide which AI tools should be used in their practices and how to quantify their value are not being addressed. This article focuses on the role of radiologists in imaging AI and suggests specific ways they can be engaged by (1) considering the clinical need for AI tools in specific clinical use cases, (2) undertaking formal evaluation of AI tools they are considering adopting in their practices, and (3) maintaining their expertise and guarding against the pitfalls of overreliance on technology.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • Background: Coronary CT angiography contains prognostic information but the best method to extract these data remains unknown.
    Purpose: To use machine learning to develop a model of vessel features to discriminate between patients with and without subsequent death or cardiovascular events. Performance was compared with that of conventional scores.
    Conclusion: Compared with Coronary Artery Disease Reporting and Data System and other scores, machine learning methods better discriminated patients who subsequently experienced an adverse event from those who did not.
    Scoring of Coronary Artery Disease Characteristics on Coronary CT Angiograms by Using Machine Learning
    Johnson KM et al.
    Radiology 2019; 00:1–9
  • Key Points
    * For prediction of all-cause mortality on the basis of coronary CT angiography, the area under the receiver operating characteristic curve (AUC) for a machine learning score was higher than for Coronary Artery Disease Reporting and Data System (CAD-RADS; 0.77 vs 0.72, respectively; P < .001).
    * For prediction of coronary artery deaths on the basis of coronary CT angiography, the AUC was higher for a machine learning score than for CAD-RADS (0.85 vs 0.79, respectively; P < .001).
    * When deciding whether to start statins, a machine learning score ensures that 93% of patients with events will be administered the drug; if CAD-RADS is used instead, only 69% will be treated (a toy sketch of this threshold logic follows the citation below).
    Scoring of Coronary Artery Disease Characteristics on Coronary CT Angiograms by Using Machine Learning
    Johnson KM et al.
    Radiology 2019; 00:1–9
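The statin key point is a statement about sensitivity at a fixed operating threshold: the fraction of patients who go on to have events whose score exceeds the treatment cutoff. A minimal, hypothetical sketch (invented score distributions and threshold, not the study's model):

```python
import numpy as np

# Toy cohort: 1 = subsequent adverse event. Scores for event patients
# are drawn higher on average than for event-free patients.
rng = np.random.default_rng(0)
events = rng.integers(0, 2, size=1000)
score = (events * rng.normal(0.7, 0.2, 1000)
         + (1 - events) * rng.normal(0.4, 0.2, 1000))

threshold = 0.5                      # treat (start statins) if score >= threshold
treated_events = np.mean(score[events == 1] >= threshold)
print(f"fraction of event patients who would receive statins: {treated_events:.0%}")
```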
  • “In conclusion, machine learning can improve the use of vessel features to discriminate between patients who will have an event and those who will not.”
    Scoring of Coronary Artery Disease Characteristics on Coronary CT Angiograms by Using Machine Learning
    Johnson KM et al.
    Radiology 2019; 00:1–9

  • OBJECTIVES To develop a deep learning–based algorithm that can classify normal and abnormal results from chest radiographs with major thoracic diseases including pulmonary malignant neoplasm, active tuberculosis, pneumonia, and pneumothorax and to validate the algorithm’s performance using independent data sets.
    CONCLUSIONS AND RELEVANCE The algorithm consistently outperformed physicians, including thoracic radiologists, in the discrimination of chest radiographs with major thoracic diseases, demonstrating its potential to improve the quality and efficiency of clinical practice.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • RESULTS The algorithm demonstrated a median (range) area under the curve of 0.979 (0.973-1.000) for image-wise classification and 0.972 (0.923-0.985) for lesion-wise localization; the algorithm demonstrated significantly higher performance than all 3 physician groups in both image-wise classification (0.983 vs 0.814-0.932; all P < .005) and lesion-wise localization (0.985 vs 0.781-0.907; all P < .001). Significant improvements in both image-wise classification (0.814-0.932 to 0.904-0.958; all P < .005) and lesion-wise localization (0.781-0.907 to 0.873-0.938; all P < .001) were observed in all 3 physician groups with assistance of the algorithm.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
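For readers unfamiliar with the metric, the image-wise figures above are areas under the receiver operating characteristic curve. A minimal sketch with toy scores (using scikit-learn, not the study's code):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy image-wise classification: reference labels (1 = abnormal) and
# an algorithm's abnormality scores for eight radiographs.
labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])
algorithm_scores = np.array([0.1, 0.3, 0.8, 0.7, 0.9, 0.2, 0.6, 0.4])
print("image-wise AUC:", roc_auc_score(labels, algorithm_scores))
```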
  • Key Points
    Question Can a deep learning–based algorithm accurately discriminate abnormal chest radiograph results showing major thoracic diseases from normal chest radiograph results?
    Findings In this diagnostic study of 54 221 chest radiographs with normal findings and 35 613 with abnormal findings, the deep learning–based algorithm for discrimination of chest radiographs with pulmonary malignant neoplasms, active tuberculosis, pneumonia, or pneumothorax demonstrated excellent and consistent performance throughout 5 independent data sets. The algorithm outperformed physicians, including radiologists, and enhanced physician performance when used as a second reader.
    Meaning A deep learning–based algorithm may help improve diagnostic accuracy in reading chest radiographs and assist in prioritizing chest radiographs, thereby increasing workflow efficacy.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • The strengths of our study can be summarized as follows. First, the development data set underwent extensive data curation by radiologists. It has been shown that the performance of deep learning–based algorithms depends not only on the quantity of the training data set, but also on the quality of the data labels. As for CRs, several open-source data sets are currently available; however, those data sets remain suboptimal for the development of deep learning–based algorithms because they are weakly labeled by radiologic reports or lack localization information. In contrast, in the present study, we initially collected data from the radiology reports and clinical diagnosis; then experienced board-certified radiologists meticulously reviewed all of the collected CRs. Furthermore, annotation of the exact location of each abnormal finding was done in 35.6% of CRs with abnormal results, which we believe led to the excellent performance of our DLAD.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • Third, we compared the performance of our DLAD with the performance of physicians with various levels of experience. The stand-alone performance of a CAD system can be influenced by the difficulty of the test data sets and can be exaggerated in easy test data sets. However, observer performance tests may provide a more objective measure of performance by comparing the performance between the CAD system and physicians. Impressively, the DLAD demonstrated significantly higher performance both in image-wise classification and lesion-wise localization than all physician groups, even the thoracic radiologist group.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • “The high performance of the DLAD in classification of CRs with normal and abnormal findings indicative of major thoracic diseases, outperforming even thoracic radiologists, suggests its potential for stand-alone use in select clinical situations. It may also help improve the clinical workflow by prioritizing CRs with suspicious abnormal findings requiring prompt diagnosis and management. It can also improve radiologists’ work efficiency, which would partially alleviate the heavy workload burden that radiologists face today and improve patients’ turnaround time. Furthermore, the improved performance of physicians with the assistance of the DLAD indicates the potential of our DLAD as a second reader. The DLAD can contribute to reducing perceptual error of interpreting physicians by alerting them to the possibility of major thoracic diseases and visualizing the location of the abnormality. In particular, the more obvious increment of performance in less-experienced physicians suggests that our DLAD can help improve the quality of CR interpretations in situations in which expert thoracic radiologists may not be available.”
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • “We developed a DLAD algorithm that can classify CRs with normal and abnormal findings indicating major thoracic diseases with consistently high performance, outperforming even radiologists, which may improve the quality and efficiency of the current clinical workflow.”
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • OBJECTIVE. Diagnostic imaging has traditionally relied on a limited set of qualitative imaging characteristics for the diagnosis and management of lung cancer. Radiomics—the extraction and analysis of quantitative features from imaging—can identify additional imaging characteristics that cannot be seen by the eye. These features can potentially be used to diagnose cancer, identify mutations, and predict prognosis in an accurate and noninvasive fashion. This article provides insights about trends in radiomics of lung cancer and challenges to widespread adoption.
    CONCLUSION. Radiomic studies are currently limited to a small number of cancer types. Their application across various centers is nonstandardized, leading to difficulties in comparing and generalizing results. The tools available to apply radiomics are specialized and limited in scope, blunting widespread use and clinical integration in the general population. Increasing the number of multicenter studies and consortiums and inclusion of radiomics in resident training will bring more attention and clarity to the growing field of radiomics.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Radiomics is defined as the quantification of the phenotypic features of a lesion from medical imaging (i.e., CT, PET, MRI, ultrasound). These features include lesion shape, volume, texture, attenuation, and many more that are not readily apparent or are too numerous for an individual radiologist to assess visually or qualitatively. In other words, radiomics is the process of creating a set of organized data based on the physical properties of an object of interest.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Regardless of lesion histology and location, the workflow in radiomics remains similar. Images of the lesion, typically CT images, are acquired. The images are segmented to define the outer limits of a given lesion. Specific phenotypic features are then selected, extracted from the images, and recorded. Finally, data analysis is performed on the recorded data. Image features can be extracted and analyzed in either 2D or 3D: 2D refers to segmentation and analysis of radiomic metrics on a single-slice image, whereas 3D refers to the same process across the entire volume of a tumor (many slices).
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
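The workflow just described (acquire, segment, extract, analyze) can be made concrete with a small sketch. This is a hypothetical toy, assuming the CT volume is a NumPy array of Hounsfield units and the segmentation is a binary mask; production pipelines typically rely on dedicated toolkits such as pyradiomics and compute hundreds of features.

```python
import numpy as np

def extract_features(volume, mask):
    """Record a few simple phenotypic features from a segmented lesion."""
    voxels = volume[mask > 0]          # step 2-3: use only segmented voxels
    return {
        "volume_voxels": int(mask.sum()),
        "mean_hu": float(voxels.mean()),
        "std_hu": float(voxels.std()),  # crude surrogate for texture
        "min_hu": float(voxels.min()),
        "max_hu": float(voxels.max()),
    }

# Step 1: toy "CT" volume (HU values) with a denser toy lesion inside.
volume = np.random.default_rng(1).normal(-600, 150, (64, 64, 64))
mask = np.zeros_like(volume, dtype=np.uint8)
mask[28:36, 28:36, 28:36] = 1

# Step 4: the recorded features would then feed downstream analysis.
print(extract_features(volume, mask))
```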
  • Image features can be extracted and analyzed in either 2D or 3D: 2D refers to segmentation and analysis of radiomic metrics on a single-slice image, whereas 3D refers to the same process across the entire volume of a tumor (many slices). Therefore, 3D radiomics, by definition, requires analysis of the entire volume of tumor. In general, feature extraction and analysis are easier and faster in 2D than in 3D, but 3D may theoretically carry more information. Two-dimensional radiomics is used more commonly, but 3D radiomics is appealing with regard to analyzing intratumoral heterogeneity in cases in which different parts of a tumor may exhibit differing histologic subtypes.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
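A toy contrast between the 2D and 3D approaches described above: the same statistic computed on a single slice versus the whole tumor volume can differ, which is exactly the distinction the text draws (invented values).

```python
import numpy as np

# Hypothetical HU values inside a tumor volume.
rng = np.random.default_rng(2)
tumor = rng.normal(30, 10, (16, 16, 16))

print("2D mean (middle slice):", tumor[8].mean())   # single-slice radiomics
print("3D mean (whole volume):", tumor.mean())      # whole-volume radiomics
```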
  • “Segmentation of a lesion is the act of extracting or isolating a lesion of interest (e.g., lung nodule) from the surrounding normal lung. Features are then extracted and are further analyzed directly from the segmented lesion. This can be thought of in distinction to deep learning, where an algorithm must learn to automatically extract features from an unsegmented image.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Lesion segmentation can be done either manually or in an automated fashion. Manual segmentation—that is, segmentation performed by a trained observer who manually outlines the lesion of interest—is time-consuming and is more prone to interobserver variability and subjectivity than semiautomated and fully automated segmentation. Manual segmentation is important when accuracy of the tumor outline (i.e., lesion shape and size) is needed.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
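As a counterpoint to manual outlining, here is a minimal sketch of fully automated segmentation by intensity thresholding plus connected-component labeling (toy image and threshold; real nodule segmentation is considerably more sophisticated).

```python
import numpy as np
from scipy import ndimage

# Toy 2D "lung" background in HU with a denser toy nodule inserted.
rng = np.random.default_rng(3)
image = rng.normal(-700, 50, (128, 128))
image[50:70, 60:80] = rng.normal(20, 15, (20, 20))

mask = image > -300                           # threshold separates the nodule
labeled, n = ndimage.label(mask)              # group contiguous pixels
sizes = ndimage.sum(mask, labeled, range(1, n + 1))
nodule = labeled == (np.argmax(sizes) + 1)    # keep the largest component
print("segmented pixels:", int(nodule.sum()))
```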
  • Shape is one of the feature categories understood as both semantic and agnostic. It is a category of features that includes diameter measurements (e.g., minimum, maximum) and their derivatives including volume, ratio of diameters, surface-to-volume ratio, and compactness. Diameter measurements and their derivatives are among the most commonly assessed features. Semantic descriptions such as round, oval, and spiculated are understood agnostically by a varied lexicon that attempts to determine how irregular the object is. In the shape category, tumor volume has shown the most promise in predicting treatment response.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
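A few of the shape features named above can be computed directly from a binary lesion mask. The formulas below (a face-counting surface estimate and one of several possible compactness definitions) are illustrative assumptions, not a standard:

```python
import numpy as np

def shape_features(mask, spacing=(1.0, 1.0, 1.0)):
    """Toy shape features from a binary 3D mask with given voxel spacing."""
    voxel_vol = float(np.prod(spacing))
    volume = mask.sum() * voxel_vol
    # Crude surface estimate: count exposed voxel faces along each axis
    # (faces touching the array border are ignored in this toy version).
    surface = 0.0
    for axis in range(mask.ndim):
        faces = np.abs(np.diff(mask.astype(np.int8), axis=axis)).sum()
        surface += faces * (voxel_vol / spacing[axis])  # area of one face
    compactness = volume ** (2 / 3) / surface           # higher = more sphere-like
    return {"volume": volume, "surface": surface,
            "surface_to_volume": surface / volume,
            "compactness": compactness}

mask = np.zeros((32, 32, 32), dtype=np.uint8)
mask[12:20, 12:20, 12:20] = 1                           # toy cubic lesion
print(shape_features(mask))
```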
  • “Texture in radiomics broadly refers to the variation in the gray-scale intensities of adjacent pixels or voxels in an image. Depending on the technique involved, texture features are categorized into first, second, and higher-order statistical measures. The first-order statistical measures are composed of features that account for variations in gray-scale intensities without accounting for their spatial location or orientation on the image. For example, a histogram of pixel or voxel intensities, which is a visual representation of the distribution of gray-scale intensity values on an image, is the most common technique to derive the first-order texture measures.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
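First-order measures, as the quote explains, are histogram statistics of the intensities inside the lesion that ignore where each pixel sits. A toy sketch with invented values:

```python
import numpy as np
from scipy import stats

# Hypothetical gray-level intensities sampled from within a lesion.
pixels = np.random.default_rng(4).normal(40, 12, 500)

hist, _ = np.histogram(pixels, bins=32, density=True)
p = hist[hist > 0] / hist[hist > 0].sum()   # normalized histogram probabilities

features = {
    "mean": pixels.mean(),
    "variance": pixels.var(),
    "skewness": stats.skew(pixels),
    "kurtosis": stats.kurtosis(pixels),
    "entropy": -(p * np.log2(p)).sum(),     # histogram (first-order) entropy
}
print(features)
```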
  • Second-order texture metrics encompass hundreds of features derived by evaluating the relationship of adjacent pixels in an ROI or across the entire lesion. These metrics account for both the intensity of a gray-scale value and its location or orientation in the image. CT images are formed from a 3D matrix of data that is used to determine the amount of gray-level color to display for a given image pixel. Texture or heterogeneity refers to analysis of adjacent pixels of gray color to determine the relationship between them; if there are wide variances in the amount of gray color in a given area, then a lesion is considered more heterogeneous or to have a coarse texture.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
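Second-order metrics can be illustrated with a gray-level co-occurrence matrix (GLCM), which tabulates how often pairs of gray levels occur at a given offset and direction; scikit-image's graycomatrix/graycoprops provide a convenient implementation (toy image below).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-gray-level image; a real analysis would quantize lesion HU values.
image = np.random.default_rng(5).integers(0, 8, (64, 64), dtype=np.uint8)

# Co-occurrence of gray-level pairs at distance 1, horizontally and vertically.
glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                    levels=8, symmetric=True, normed=True)
print("contrast:", graycoprops(glcm, "contrast"))
print("homogeneity:", graycoprops(glcm, "homogeneity"))
```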
  • “Texture has shown the most promise in predicting the presence of malignancy and prognosis. Local binary patterns (LBPs) and gray-level co-occurrence matrices (GLCMs) are most often used for this purpose. However, evaluations of nodule heterogeneity or texture are not limited to LBPs or GLCMs. Numerous alternative methods that attempt to extract patterns from an image via a series of mathematic transformations or filters applied to the image, including Laws’ energy descriptors, fractal analysis, and wavelet analysis, are increasingly being applied. This latter group of texture metrics includes higher-order statistical measures. Texture analysis has practical applications; for example, Parmar and colleagues showed that texture features in lung cancer were significantly associated with tumor stage and patient survival.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
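Of the descriptors the quote names, local binary patterns are easy to demonstrate: each pixel is encoded by thresholding its circular neighborhood against it, and the histogram of codes summarizes texture (toy image; scikit-image implementation).

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Hypothetical 8-bit image standing in for a lesion region of interest.
image = np.random.default_rng(6).integers(0, 256, (64, 64), dtype=np.uint8)

# 8 neighbors at radius 1; "uniform" maps rotation-equivalent patterns together.
lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=int(lbp.max() + 1), density=True)
print("LBP code histogram:", np.round(hist, 3))
```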

  • “Segmentation and feature recognition currently rely on the initial identification of a nodule by a radiologist. Thus, the near-term and medium-term role of radiomics is likely to be as a support tool in which radiomics is integrated with traditional radiologic and invasive histologic information. We should note that many prior studies achieved highest accuracy when radiomic data were viewed in light of genetic and clinical information.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • “Most importantly, the study of radiomics must be drastically expanded to account for the numerous clinical and radiologic presentations of lung cancer. Radiomics is predicated on creating tools to more accurately diagnose lung cancer and determine prognosis of patients with lung cancer in a noninvasive fashion. However, the tools available to practice radiomics are specialized and limited in scope, blunting widespread use and clinical integration in the general population. Looking forward, we believe that increasing the number of multicenter studies and consortiums and inclusion of radiomics in resident training will bring more attention to the growing field of radiomics.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • “Other challenges for radiomics include advancing interinstitutional standards for image acquisition and reconstruction parameters and the development of a unified lexicon. Radiomic data are affected by different image acquisition and reconstruction parameters (e.g., contrast timing, slice thickness, reconstruction algorithm, tube voltage, tube current, and so on) that can affect the reproducibility of radiomic features. Many radiomic studies have relied on a heterogeneous dataset of imaging using a mixture of these parameters. Standardized imaging parameters, including consistent contrast dose, timing, and radiation dose levels, will likely need to be implemented for radiomic studies.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • “Furthermore, radiomics can be performed in 2D or 3D. Two-dimensional radiomics is applied to a single image slice, and the resulting radiomic features can vary from slice to slice. Three-dimensional radiomics is applied to the entire volume of a tumor. The potential differences between these two fundamentally different approaches require further evaluation. In addition, radiomics is a multidisciplinary field with experts from different backgrounds who approach radiomics in different ways. These experts often collaborate and have to understand and incorporate the methods and rationale of sometimes unfamiliar disciplines. For example, computer science researchers may have limited knowledge and experience with medical image acquisition and reconstruction. A unified lexicon will be necessary to maintain consistency, especially for researchers who have limited experience with medical imaging.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Accurate identification and localization of abnormalities from radiology images play an integral part in clinical diagnosis and treatment planning. Building a highly accurate prediction model for these tasks usually requires a large number of images manually annotated with labels and finding sites of abnormalities. In reality, however, such annotated data are expensive to acquire, especially the ones with location annotations. We need methods that can work well with only a small amount of location annotations. To address this challenge, we present a unified approach that simultaneously performs disease identification and localization through the same underlying model for all images. We demonstrate that our approach can effectively leverage both class information as well as limited location annotation, and significantly outperforms the comparative reference baseline in both classification and localization tasks.
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al.
    arXiv, March 2018 (in press)
  • “We propose a unified model that jointly models disease identification and localization with limited localization annotation data. This is achieved through the same underlying prediction model for both tasks. Quantitative and qualitative results demonstrate that our method significantly outperforms the state-of-the-art algorithm”
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al.
    arXiv, March 2018 (in press)
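The core idea — every image contributes to the identification loss, while localization is supervised only where annotations exist — can be sketched as a joint loss. This is an illustrative simplification in PyTorch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_loss(class_logits, heatmap_logits, labels, annotated, target_maps):
    # Image-level disease identification on all images in the batch.
    cls = F.binary_cross_entropy_with_logits(class_logits, labels)
    # Localization term applied only to the annotated subset.
    if annotated.any():
        loc = F.binary_cross_entropy_with_logits(
            heatmap_logits[annotated], target_maps[annotated])
    else:
        loc = torch.zeros((), device=class_logits.device)
    return cls + loc

# Toy batch: 4 images, 1 disease, 8x8 prediction grid; 1 image annotated.
class_logits = torch.randn(4, 1)
heatmaps = torch.randn(4, 1, 8, 8)
labels = torch.tensor([[1.], [0.], [1.], [1.]])
annotated = torch.tensor([True, False, False, False])
targets = torch.zeros(4, 1, 8, 8)
targets[0, 0, 2:5, 2:5] = 1.0        # toy location annotation for image 0
print(joint_loss(class_logits, heatmaps, labels, annotated, targets))
```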

  • “To address these two issues, the authors made use of a large data set of 103 489 chest radiographs obtained between 2007 and 2016 in 46 712 patients. Only 5232 patients with 7390 radiographs had a BNP test value available. This data set with BNP data was termed “labeled,” and the other data set without BNP data was termed “unlabeled.” In the labeled data set, BNP level was dichotomized at 100 ng/L, above which CHF was defined as present. The labeled data set was divided into a training data set (80% of the data) and a test data set (20% of the data).”
    Using a Deep Learning Network to Diagnose Congestive Heart Failure
    Ngo LH
    Radiology 2019; 00:1–2 • https://doi.org/10.1148/radiol.2018182341
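The labeling and split described in the quote are simple to express in code. A toy sketch (the BNP values are invented; only the 100 ng/L cutoff and the 80/20 split come from the text):

```python
import numpy as np

# Dichotomize BNP at 100 ng/L to define CHF, then split the labeled
# radiographs 80/20 into training and test sets.
rng = np.random.default_rng(7)
bnp = rng.lognormal(mean=4.5, sigma=1.0, size=7390)   # toy BNP values, ng/L
chf_label = (bnp > 100).astype(int)                   # 1 = CHF present

idx = rng.permutation(len(bnp))
cut = int(0.8 * len(bnp))
train_idx, test_idx = idx[:cut], idx[cut:]
print("train:", len(train_idx), "test:", len(test_idx),
      "CHF prevalence:", round(chf_label.mean(), 2))
```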
  • Nevertheless, clearly the work of Seah et al is highly innovative and has wide applications in many different areas in medical imaging. The concept of GVR is in fact similar to the idea of counterfactuals used in causal inference studies. A GVR-generated deep learning neural network system (as nicely implemented in this study) would definitely improve over time as more labeled images, finer-resolution images, and improved machine learning algorithms become available. One can easily imagine having this system as an additional tool to assist radiologists in delivering better diagnostic information to their patients.
    Using a Deep Learning Network to Diagnose Congestive Heart Failure
    Ngo LH
    Radiology 2019; 00:1–2 • https://doi.org/10.1148/radiol.2018182341
  • BriefCase is a radiological computer aided triage and notification software indicated for use in the analysis of non-enhanced head CT images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive findings of pathologies in head CT images, namely Intracranial Hemorrhage (ICH).
    BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected ICH on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
    The results of BriefCase are intended to be used in conjunction with other patient information and based on professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
  • AI and Pathology in Lung Cancer
  • Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them—STK11, EGFR, FAT1, SETBP1, KRAS and TP53—can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations.
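As a sketch of the transfer-learning setup this describes, here is the model surgery for a three-class Inception v3 in torchvision. This is hypothetical scaffolding: tile extraction from whole-slide images, training, and validation are all omitted.

```python
import torch
import torchvision

# Inception v3 backbone with its final layer replaced for three classes
# (LUAD, LUSC, normal), mirroring the classification task described above.
model = torchvision.models.inception_v3(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 3)
model.eval()                                   # eval mode: plain logits output

tile = torch.randn(1, 3, 299, 299)             # one toy 299x299 RGB slide tile
with torch.no_grad():
    probs = torch.softmax(model(tile), dim=1)  # per-class probabilities
print(probs)
```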
  • Purpose: To develop and validate a deep learning–based automatic detection algorithm (DLAD) for malignant pulmonary nodules on chest radiographs and to compare its performance with physicians including thoracic radiologists.
    Conclusion: This deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nodule detection performance for malignant pulmonary nodules on chest radiographs, and it enhanced physicians’ performances when used as a second reader.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al. Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
  • Materials and Methods: For this retrospective study, DLAD was developed by using 43 292 chest radiographs (normal radiograph– to–nodule radiograph ratio, 34 067:9225) in 34 676 patients (healthy-to-nodule ratio, 30 784:3892; 19 230 men [mean age, 52.8 years; age range, 18–99 years]; 15 446 women [mean age, 52.3 years; age range, 18–98 years]) obtained between 2010 and 2015, which were labeled and partially annotated by 13 board-certified radiologists, in a convolutional neural network. Radiograph classification and nodule detection performances of DLAD were validated by using one internal and four external data sets from three South Korean hospitals and one U.S. hospital. For internal and external validation, radiograph classification and nodule detection performances of DLAD were evaluated by using the area under the receiver operating characteristic curve (AUROC) and jackknife alternative free-response receiver-operating characteristic (JAFROC) figure of merit (FOM), respectively. An observer performance test involving 18 physicians, including nine board-certified radiologists, was conducted by using one of the four external validation data sets. Performances of DLAD, physicians, and physicians assisted with DLAD were evaluated and compared.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
  • Results: According to one internal and four external validation data sets, radiograph classification and nodule detection performances of DLAD ranged from 0.92 to 0.99 (AUROC) and from 0.831 to 0.924 (JAFROC FOM), respectively. DLAD showed a higher AUROC and JAFROC FOM at the observer performance test than 17 of 18 and 15 of 18 physicians, respectively (P < .05), and all physicians showed improved nodule detection performances with DLAD (mean JAFROC FOM improvement, 0.043; range, 0.006–0.190; P < .05).
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
  • Summary: Our deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nodule detection performance for malignant pulmonary nodules on chest radiographs, and when used as a second reader, it enhanced physicians’ performances.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237
  • Implications for Patient Care
    - Our deep learning–based automatic detection algorithm showed excellent detection performances on both a per-radiograph and per-nodule basis in one internal and four external validation data sets.
    - Our deep learning–based automatic detection algorithm demonstrated higher performance than the thoracic radiologist group.
    - When accompanied by our deep learning–based automatic detection algorithm, all physicians improved their nodule detection performances.

  • “The process of achieving value in terms of medical decision support does not remove the clinician or radiologist, but instead, provides easier access to information that might otherwise be inaccessible, inefficient, or difficult to integrate in real-time for the consulting physician. When this information is distilled in a way available to the radiologist, it becomes knowledge that can positively impact the clinician’s judgment in a personalized way in real-time.”
    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
  • “Many tools have been developed to risk stratify patients into categories of pretest probability for CAD by generalizing patients into low-risk, medium-risk, and high-risk categories. Examples such as the Diamond and Forrester method, the Duke Clinical Score, and the Framingham Risk Score incorporate prior clinical history of cardiac events, certain characteristics of the chest pain, family history, medical history, age, sex, and results of a lipid panel. Imaging findings have been used in this type of risk stratification as well, with coronary calcium scoring.”
    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
  • “Importantly for radiologists, machine learning algorithms can help address many problems in current-day radiology practices that do not involve image interpretation. Although much of the attention in the machine learning space has focused on the ability of machines to classify image findings, there are many other useful applications of machine learning that will improve efficiency and utilization of radiology practices today. Moreover, we may see a world where a symbiosis of subspecialty experts and machines leads to better care than could be provided by either one alone. Those practices that implement these technologies today are likely to better position themselves for the future.”
    Machine Learning in Radiology: Applications Beyond Image Interpretation
    Paras Lakhani et al.
    J Am Coll Radiol (in press)
