Deep Learning Exhibits ❯ RSNA 2018

Application of Deep Learning to Pancreatic Imaging – The Radiologists’ Perspective

 

 


Linda C. Chu, Seyoun Park, Satomi Kawamoto, Daniel F. Fouladi, Shahab Shayesteh, Karen M. Horton, Alan L. Yuille, Ralph H. Hruban, Bert Vogelstein, Kenneth W. Kinzler and Elliot K. Fishman

The Russell H. Morgan Department of Radiology and Radiological Science, the Department of Pathology, the Department of Cancer Research, and the Department of Computer Science, Johns Hopkins University, Baltimore

 

Disclosure

  • Research support from The Lustgarten Foundation:
    • Linda C. Chu
    • Seyoun Park
    • Satomi Kawamoto
    • Daniel F. Fouladi
    • Shahab Shayesteh
    • Alan L. Yuille
    • Bert Vogelstein
    • Elliot K. Fishman

 

Learning Objectives

  • Illustrate practical aspects of applying deep learning to pancreatic imaging
  • Review challenges in application of deep learning in our current clinical practice
  • Discuss future directions of deep learning in medical imaging

 

Introduction

  • Pancreatic ductal adenocarcinoma (PDAC) is the 3rd most common cause of cancer death in the US, with a dismal five-year survival rate of 8.2%
  • CT is the most commonly used imaging modality for the initial evaluation of suspected PDAC
  • Sensitivity for PDAC detection on CT ranges from 76% to 96%
  • Accuracy of PDAC detection depends critically on imaging technique and the experience of the radiologist
  • Early signs of PDAC can be subtle and may be seen retrospectively up to 34 months before diagnosis
https://seer.cancer.gov. Chu LC et al. Cancer J. 2017;23(6):333-342. Gonoi W et al. Eur Radiol. 2017;27(12):4941-4950.

 

Deep Learning

  • Deep learning, a form of artificial intelligence, has the potential to revolutionize the practice of radiology through improved disease detection and prognostication
  • Uses training data and multiple layers of equations to develop a mathematical model that fits the data
  • Application of deep learning to abdominal imaging is relatively uncharted territory with many potential approaches
Erickson BJ et al. RadioGraphics 2017;37:505-15. Kohli M et al. AJR 2017;208:754-60.

 



  • A single-layer network adjusts the weights of its nodes so that the model can map input (x) to output (y)
  • In a deep neural network, there are many hidden layers of interconnected nodes, where the output of one layer becomes the input of the next layer
  • A deep convolutional neural network can take medical imaging data as input and generate segmentation, classification, and prognostication as output
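As a purely schematic illustration (not the FELIX architecture), the layer-by-layer idea above can be sketched in a few lines of NumPy: a single-layer network applies one weight matrix, while a deep network stacks hidden layers so that each layer's output becomes the next layer's input.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # simple nonlinearity applied between layers
    return np.maximum(z, 0.0)

def single_layer(x, W, b):
    # y = Wx + b: one set of weights, no hidden representation
    return W @ x + b

def deep_network(x, layers):
    # each (W, b) pair is one layer; the output of one layer
    # is the input of the next
    h = x
    for W, b in layers:
        h = relu(W @ h + b)
    return h

x = rng.standard_normal(4)  # toy input features
y_single = single_layer(x, rng.standard_normal((2, 4)), np.zeros(2))
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((8, 8)), np.zeros(8)),
          (rng.standard_normal((2, 8)), np.zeros(2))]
y_deep = deep_network(x, layers)
print(y_single.shape, y_deep.shape)  # (2,) (2,)
```

Both networks map the same 4-feature input to a 2-value output, but only the deep network builds intermediate hidden representations along the way.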

 

Our Experience – The FELIX Project

  • Mission: To improve detection of pancreatic ductal adenocarcinoma by applying deep learning algorithms to CT images
  • The name FELIX was inspired by the “Felix Felicis” (a.k.a. “Liquid Luck”) potion in Harry Potter
  • We hope our ambitious multidisciplinary collaboration will be successful – with a little bit of luck!

 

Multidisciplinary Team Approach


 

The FELIX Project

  • We will discuss our approach, experience, and practical lessons:
    1. Supervised learning
    2. High quality input data
    3. Learning normal anatomy
    4. Recognizing tumor
    5. Troubleshooting

 

#1: Supervised Learning

  • Deep learning can be performed with supervised learning or unsupervised learning
  • We chose the supervised learning approach since few algorithms currently exist for abdominal imaging
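The supervised learning setup can be shown with a minimal, purely didactic example in which a model learns from inputs paired with ground-truth labels; the logistic-regression model and the synthetic data below are hypothetical and unrelated to the FELIX algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))          # 200 toy "cases", 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # ground-truth labels

# supervised training: repeatedly adjust the weights to shrink the
# error between the model's prediction and the known label
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on log loss
    b -= 0.5 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The key ingredient is the labeled ground truth: without expert-verified labels, this feedback loop has nothing to learn from, which is why the segmentation effort described next matters so much.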

Chartrand G et al. RadioGraphics 2017;37:2113-2131.

 

Overall Workflow


 

#2: High Quality Input Data

  • Deep learning requires a large amount of high-quality input data
  • Our input CT data:
    • 64-slice or dual-source multidetector CT exams
    • Dual-phase (arterial and venous)
    • Reconstructed into 0.75 mm thick slices
  • Abdominal organs and vasculature were manually segmented using commercially available segmentation software (Velocity™, Varian Medical Systems Inc.)

Kawamoto S et al. [Submitted]

 

#2: High Quality Input Data

  • Ensuring accurate segmentation of the ground truth is a time-consuming and labor-intensive process:
    • Four full-time and one part-time trained researchers were dedicated to segmentation
    • Contours were verified by 3 board-certified radiologists with 5-30 years of experience in CT imaging
    • Since each abdominal organ and major abdominal vessel was segmented on 0.75 mm thin images, each case took an average of 4 hours to complete
  • To date, we have segmented:
    • 575 normal controls without known pancreatic disease
    • 750 PDAC cases
Kawamoto S et al. [Submitted]

 

#3: Learning Normal Anatomy

  • In order to teach the computer to detect pancreatic cancer, we must first teach it to recognize normal anatomy
  • We have developed 2D and 3D deep learning algorithms for pancreas segmentation

Zhou Y et al. arXiv:1612.08230. MICCAI, 2017.

 

The FELIX segmentation algorithm outperformed state-of-the-art algorithms on the publicly available NIH dataset. It achieved even higher segmentation accuracy on the Johns Hopkins dataset, likely due to the larger high-quality dataset, thinner slices, and our team's segmentation experience.

Johns Hopkins Dataset

Saito A et al. Med Image Anal. 2016;28:46-65. Karasawa K et al. Med Image Anal. 2017;39:18-28. Roth H et al. arXiv:1606.07830. MICCAI, 2016. Roth H et al. arXiv:1702.00045. 2017. Cai J et al. arXiv:1707.04912. MICCAI, 2017. Yu Q et al. arXiv:1709.04518. CVPR, 2018.

 

#3: Learning Normal Anatomy

  • Segmentation of the pancreas alone requires less time and effort than segmentation of all the abdominal organs
  • However, pancreatic cancer may affect other structures:
    • CBD/pancreatic duct dilatation
    • Local tumor/vascular invasion
    • Metastatic disease
  • Knowledge of the location and normal appearance of neighboring organs should help detect pancreatic pathology and eliminate false positives in which adjacent organs are mistaken for tumor
  • Therefore, we chose to segment all abdominal organs

 

Multi-Organ Segmentation

We have developed algorithms that achieve >85% accuracy in segmentation of the major abdominal organs
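Segmentation accuracy in this literature is commonly reported as the Dice-Sorensen coefficient (DSC) between the predicted and ground-truth masks; a minimal sketch, assuming binary voxel masks (the small 2D masks below are illustrative only):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice-Sorensen coefficient between two binary masks:
    2 * |pred & truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # two empty masks agree perfectly by convention
    return 2.0 * intersection / denom if denom else 1.0

pred = np.zeros((4, 4), dtype=int)
truth = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1    # predicted organ mask (4 voxels)
truth[1:3, 1:4] = 1   # ground-truth mask (6 voxels)
print(round(dice_coefficient(pred, truth), 2))  # 0.8
```

A DSC of 1.0 means perfect overlap with the manually segmented ground truth; 0 means none.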


Wang Y et al. arXiv:1804.08414.

 

Multi-Organ Segmentation

Multi-organ segmentation (n = 575):


Wang Y et al. arXiv:1804.08414. Kawamoto S et al. [Submitted]

 

Multi-Organ Segmentation

  • We achieved our first task of teaching the computer normal anatomy
  • Next, we focused our efforts on tumor detection

Wang Y et al. arXiv:1804.08414.

 

#4: Recognizing Tumor

  • Radiologists use different visual cues to detect pancreatic cancer:
    • Abnormal shape
    • Change in attenuation and texture
    • Abrupt transition of dilated pancreatic duct
    • Dilated common bile duct
    • Peripancreatic vascular involvement
  • We used this expert knowledge to help design the deep network to recognize the tumor

 

#4: Recognizing Tumor

We have developed a number of deep learning algorithms that can recognize pancreatic ductal adenocarcinoma based on abnormal shape and/or texture


Liu F et al. arXiv:1804.10684. Zhu Z et al. arXiv:1807.02941.

 

#4: Recognizing Tumor

  • Preliminary results showed that the algorithms can detect PDAC with >90% sensitivity and >90% specificity across a range of tumor sizes and appearances
  • 3 cases of small PDAC that were correctly detected by the deep network are shown below:
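Sensitivity and specificity follow directly from confusion-matrix counts; the case numbers below are illustrative only, not the actual study results:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """True positive rate and true negative rate."""
    sensitivity = tp / (tp + fn)  # fraction of cancers detected
    specificity = tn / (tn + fp)  # fraction of controls cleared
    return sensitivity, specificity

# e.g., 92 of 100 PDAC cases flagged and 95 of 100 controls cleared
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=95, fp=5)
print(sens, spec)  # 0.92 0.95
```

For a screening application, false negatives (missed cancers) are costlier than false positives, which is why sensitivity is weighted heavily in the troubleshooting discussion that follows.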

Liu F et al. arXiv:1804.10684. Zhu Z et al. arXiv:1807.02941.

 

#5: Troubleshooting

  • We hold weekly lab meetings to review our progress
  • Radiologists review false-positive and false-negative cases to help identify systematic errors
  • This ongoing feedback drives further refinement of the algorithm to correct these errors
  • Mutual feedback also leads radiologists to enrich the collection with difficult cases that challenge the network

 

#5: Troubleshooting


 

Single-Phase vs. Dual-Phase

  • Most existing algorithms use input data from only a single phase (e.g. the venous phase)
  • We chose to analyze dual-phase data to maximize our potential to detect pancreatic pathology

 

Single-Phase vs. Dual-Phase

Combining the arterial and venous phases can improve segmentation accuracy of the pancreatic tumor by improving segmentation accuracy of adjacent organs and vasculature


 

False Negative – Small or Exophytic Tumors

  • Small or exophytic tumors are challenging for both radiologists and deep learning algorithms
  • Secondary signs of CBD and pancreatic duct obstruction are often absent

 

False Negative – Small or Exophytic Tumors

  • One solution is to enrich the training dataset with these challenging cases
  • Detection of these small, subtle, early-stage tumors will have the greatest impact on improving patient prognosis

 

False Negative – Isoattenuating Tumors

  • Recognition of a dilated pancreatic duct with abrupt cut-off is key to the detection of isoattenuating PDAC
  • We train the deep network to flag cases with a dilated duct as suspicious

 

False Positive – Prediction of Tumor Outside Pancreas

Applying the multi-organ segmentation algorithm can correct false positives in which the predicted tumor falls within the predicted location of another organ


 

False Positive – Focal Fat

  • The deep network is sensitive to changes in attenuation and texture and can misclassify cases of focal fat as suspicious
  • We train the deep network to recognize focal fat
  • From a screening perspective, it may be better for the deep network to err on the side of being too sensitive

 

Future Directions

  • Enrich training dataset with small and challenging tumors to improve the performance of the deep network
  • Include other tumor types (e.g. PNETs, cystic neoplasms) and pancreatitis for detection and classification
  • Test our algorithm on datasets from other institutions to ensure its performance with other scanner types and scan protocols

 

Future Directions

  • In the near future, we anticipate that deep learning can achieve diagnostic performance comparable to that of radiologists
  • We foresee deep learning serving as a “second reader,” analogous to how CAD is used in mammography to reduce misses
  • This can lead to earlier detection of pancreatic cancer, which will have a significant impact on patient prognosis

 

Conclusion

  • We presented our experience in applying deep learning to the detection of pancreatic cancer
  • A multidisciplinary approach is key to success, as it brings together expert knowledge from radiology, computer science, pathology, and cancer research
  • A large-scale, high-quality dataset is essential
  • The algorithm needs to incorporate cues from other abdominal organs to optimize its performance

 

References

  • Cai J et al. arXiv:1707.04912. MICCAI, 2017.
  • Chartrand G et al. RadioGraphics 2017;37:2113-2131.
  • Chu LC et al. Cancer J. 2017;23(6):333-342.
  • Erickson BJ et al. RadioGraphics 2017;37:505-15.
  • Gonoi W et al. Eur Radiol. 2017;27(12):4941-4950.
  • Karasawa K et al. Med Image Anal. 2017;39:18-28.
  • Kohli M et al. AJR 2017;208:754-60.
  • Liu F et al. arXiv:1804.10684.
  • Saito A et al. Med Image Anal. 2016;28:46-65.
  • Roth H et al. arXiv:1606.07830. MICCAI, 2016.
  • Roth H et al. arXiv:1702.00045. 2017.
  • Wang Y et al. arXiv:1804.08414.
  • Yu Q et al. arXiv:1709.04518. CVPR, 2018.
  • Zhou Y et al. arXiv:1612.08230. MICCAI, 2017.
  • Zhu Z et al. arXiv:1807.02941.
  • https://seer.cancer.gov.
© 2020 Elliot K. Fishman, MD, FACR
All Rights Reserved.
www.CTISUS.com