Imaging Pearls ❯ Deep Learning ❯ Artificial Intelligence (AI)


  • ”Once an algorithm is deployed into clinical practice, legal and ethical challenges must be considered. When errors are made using AI algorithms, the question arises who is responsible for the mistakes made by a computer. Is it the radiologist, the AI application itself, or the company that made the AI application responsible? This question is especially important if the algorithm has not explained the inferences in terms that can be understood by humans such as bounding boxes or saliency maps. At times radiologists may not truly understand how AI algorithms arrive at certain conclusions. If we don't understand the process behind how AI algorithms work, how can we be held solely accountable for mistakes? This “black box” problem has made many groups, including the American Medical Association, develop policies that insist developers provide transparency and explicability in algorithm development.”
    Artificial intelligence in radiology: the ecosystem essential to improving patient care
    Julie Sogani, Bibb Allen Jr, Keith Dreyer, Geraldine McGinty
    Clinical Imaging (in press)
  • ” As AI continues to evolve, healthcare as we know it will dramatically change. Radiologists have always served at the forefront in adapting new technologies in medicine, and it should be no different with the advent of the AI revolution. AI will not replace radiologists; instead those radiologists who take advantage of AI may ultimately replace those who refuse to accept it. It is crucial we build an ecosystem of key players in technology, research, radiology, and the regulatory bodies who will work together to effectively and safely integrate AI into clinical practice. As a result, adoption of this technology will expand our efficiency and decision-making capabilities, leading to earlier and better detection of disease and improved outcomes for our patients.”
    Artificial intelligence in radiology: the ecosystem essential to improving patient care
    Julie Sogani, Bibb Allen Jr, Keith Dreyer, Geraldine McGinty
    Clinical Imaging (in press)
  • “The AI-based noise reduction could improve the IQ of aorta CTA with low kV and reduced CM, which achieved the potential of radiation dose and contrast media reduction compared with conventional aorta CTA protocol.”
    Application of Artificial Intelligence–based Image Optimization for Computed Tomography Angiography of the Aorta With Low Tube Voltage and Reduced Contrast Medium Volume
    Wang, Y et al.
    Journal of Thoracic Imaging (in press)
  • Purpose: The purpose of this study was to evaluate the impact of artificial intelligence (AI)-based noise on aorta computed tomography angiography (CTA) image quality (IQ) at 80 kVp tube voltage and 40 mL contrast medium (CM)
    Results: The image noise significantly decreased while signal-to-noise ratio and contrast-to-noise ratio significantly increased in the order of group A1, B, and A2 (all P<0.05). Compared with group B, the subjective IQ score of group A1 was significantly lower (P<0.05), while that of group A2 had no significant difference (P>0.05). The effective dose and CM volume of group A were reduced by 79.18% and 50%, respectively, than that of group B.
    Application of Artificial Intelligence–based Image Optimization for Computed Tomography Angiography of the Aorta With Low Tube Voltage and Reduced Contrast Medium Volume
    Wang, Y et al.
    Journal of Thoracic Imaging (in press)
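  • As background for the noise, SNR, and CNR metrics reported above, here is a minimal Python sketch using the conventional definitions (ROI mean over standard deviation of attenuation); the HU values and ROI sizes are invented for illustration and are not data from the study, whose exact ROI definitions may differ:

      import numpy as np

      def snr(roi_hu):
          # Signal-to-noise ratio: mean attenuation divided by noise (SD) in the ROI
          return roi_hu.mean() / roi_hu.std()

      def cnr(roi_vessel_hu, roi_background_hu):
          # Contrast-to-noise ratio: attenuation difference over background noise
          return (roi_vessel_hu.mean() - roi_background_hu.mean()) / roi_background_hu.std()

      # Hypothetical HU samples from an aortic-lumen ROI and a muscle ROI
      rng = np.random.default_rng(0)
      aorta = rng.normal(400, 25, 500)    # enhanced lumen
      muscle = rng.normal(50, 25, 500)    # background tissue
      print(f"SNR {snr(aorta):.1f}  CNR {cnr(aorta, muscle):.1f}")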
  • “This article focuses on the role of radiologists in imaging AI and suggests specific ways they can be engaged by (1) considering the clinical need for AI tools in specific clinical use cases, (2) undertaking formal evaluation of AI tools they are considering adopting in their practices, and (3) maintaining their expertise and guarding against the pitfalls of overreliance on technology.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • “Failure of AI algorithms to generalize well to new data arises because everything that AI algorithms that are trained solely on data (deep learning) “know” is based on the data that were used to train them. If the training data do not include certain types of cases that a radiology practice may encounter (eg, different diseases, different image types, artifacts), then the algorithm may provide unexpected results. Bias in training data is a common cause of AI algorithms to fail to generalize, for example, because of differences in patient populations, types of equipment, and imaging parameters used and lack of representation of rare diseases.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.

  • “There are some dangers, however, of unexpected negative consequences of AI on radiology practice, even if these algorithms perform well according to metrics on local practice data as described earlier. The first negative consequence is blind acceptance of the AI output. The AI algorithms are generally expected to be used to supplement, not replace, radiologists, who are presumed to have formulated an independent judgement before considering the output from the AI algorithm.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • ”In some cases, especially high-volume and time-pressured practices, there may be a temptation to simply accept the AI reading and not formulate an independent judgement. In that case, radiologist performance will be no better than that of the AI algorithm (of course, the same applies to showing a case to a colleague). The danger in the case of the AI algorithm, however, is that if it does not generalize well to unusual cases, it may lead radiologists astray.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • ”Patients have concerns that AI tools could produce restricted views with wrong diagnoses, and they believe that such automated systems should remain secondary to the opinions of radiologists. It will thus be beneficial for radiologists to keep these patient perspectives in mind as well as the pitfalls of assistive technologies as AI algorithms enter the market. Finally, overreliance on technology and temptation to blindly accept AI outputs could adversely affect the training of future radiologists, who may not learn the critical observation and interpretative skills that make radiology a unique discipline.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • TAKE-HOME POINTS
    - The pace of AI development is exploding, and the number of AI tools being marketed to radiologists is accelerating, posing challenges for radiologists to decide which tools to adopt.
    - The role of radiologists in imaging AI is to identify important clinical use cases for which these tools are needed and to evaluate their effectiveness in their clinical practice.
    - AI tools are expected to improve radiologist practice, but radiologists must guard against overreliance on these technologies and the potential accompanying loss of clinical expertise.
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • AI 2019 Reality
  • Radiology on Top But!
  • Objective: To evaluate the design characteristics of studies that evaluated the performance of artificial intelligence (AI) algorithms for the diagnostic analysis of medical images.
    Materials and Methods: PubMed MEDLINE and Embase databases were searched to identify original research articles published between January 1, 2018 and August 17, 2018 that investigated the performance of AI algorithms that analyze medical images to provide diagnostic decisions. Eligible articles were evaluated to determine 1) whether the study used external validation rather than internal validation, and in case of external validation, whether the data for validation were collected, 2) with diagnostic cohort design instead of diagnostic case-control design, 3) from multiple institutions, and 4) in a prospective manner. These are fundamental methodologic features recommended for clinical validation of AI performance in real-world practice. The studies that fulfilled the above criteria were identified. We classified the publishing journals into medical vs. non-medical journal groups. Then, the results were compared between medical and non-medical journals.
    Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers
    Dong Wook Kim et al.
    Korean J Radiol 2019;20(3):405-410
  • Results: Of 516 eligible published studies, only 6% (31 studies) performed external validation. None of the 31 studies adopted all three design features: diagnostic cohort design, the inclusion of multiple institutions, and prospective data collection for external validation. No significant difference was found between medical and non-medical journals.
    Conclusion: Nearly all of the studies published in the study period that evaluated the performance of AI algorithms for diagnostic analysis of medical images were designed as proof-of-concept technical feasibility studies and did not have the design features that are recommended for robust validation of the real-world clinical performance of AI algorithms.
    Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers
    Dong Wook Kim et al.
    Korean J Radiol 2019;20(3):405-410
  • What if AI is Dutch Tulips? Or worse?
  • Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures to diseases that have confounded doctors and make health care more efficient, personalized, and accessible. But what if it turns out to be poison? Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston Tuesday that examined the use of AI to accelerate the delivery of precision medicine to the masses. He used an alarming metaphor to explain his concerns: “I think of machine learning kind of as asbestos,” he said. “It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.”
  • "If computers continue to obey Moore's Law, doubling their speed and memory capacity every eighteen months, the result is that computers are likely to over​take humans in intelligence at some point in the next hundred years. When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours. It's tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.
    Brief Answers to the Big Questions
    Stephen Hawking
  • "For the last twenty years or so, AI has been focused on the problems surrounding the construction of intelligent agents, systems that perceive and act in a particular environment. In this context, intelligence is related to statistical and economic notions of rationality -- that is, colloquially, the ability to make good decisions, plans or inferences. As a result of this recent work, there has been a large degree of integration and cross-fertilisation among Al, machine- learning, statis​tics, control theory, neuroscience and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine transla​tion, legged locomotion and question-answering systems.
    Brief Answers to the Big Questions
    Stephen Hawking
  • “AI can augment our existing intelligence to open up advances in every area of science and society. However, it will also bring dangers. While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. The concern is that AI would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours. Others believe that humans can command the rate of technology for a decently long time, and that the potential of AI to solve many of the world's problems will be realised. Although I am well known as an optimist regarding the human race, I am not so sure."
    Brief Answers to the Big Questions
    Stephen Hawking
  • OBJECTIVE. Artificial intelligence (AI) neural networks rapidly convert disparate facts and data into highly predictive analytic models. Machine learning maps image-patient phenotype correlations opaque to standard statistics. Deep learning performs accurate image-derived tissue characterization and can generate virtual CT images from MRI datasets. Natural language processing reads medical literature and efficiently reconfigures years of PACS and electronic medical record information.
    CONCLUSION. AI logistics solve radiology informatics workflow pain points. Imaging professionals and companies will drive health care AI technology insertion. Data science and computer science will jointly potentiate the impact of AI applications for medical imaging.
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • “AI is not a mindless black-box technology passively fixing the world’s data explosion problems; however, under varying degrees of human supervision, superfast computers can process massive datasets through convolutional neural networks (CNNs) of layered algorithms to produce predictive models that would defy standard statistical analyses.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • "Generative adversarial networks (GANs), first described in 2014, are a computing framework for explaining how deep CNNs can make mistakes in correctly predicting images of objects, speech patterns, and natural language symbols from rich datasets. Successful deep CNNs apply discriminative models that back-propagate derivatives and apply dropout algorithms to estimate the probability that an output sample has been derived from training data.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • “A multilayered deep CNN can discriminate pixel depths of 32 bits, far exceeding the typical human visual resolution capacity of 8 bits. This allows AI scientists to apply GANs to attack deep CNN layers by modifying the 32-bit pixel information to the point where a computer erroneously perceives a picture of a panda as a gibbon, while humans still clearly see a panda. This CNN vulnerability can be exploited for medical applications: GANs can create medical records of patient characteristics to determine new drug efficacy in an uncommon disease phenotype or to derive virtual images from another entirely different digital imaging modality.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
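  • To make the adversarial idea above concrete, here is a minimal Python sketch of the same principle on a simple linear classifier rather than a deep CNN (the data are synthetic and the construction is only illustrative, not the GAN-based attack the authors describe): a per-pixel change far smaller than the image's dynamic range is enough to flip the predicted class.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 64))               # stand-in for flattened image pixels
      y = (X[:, :8].sum(axis=1) > 0).astype(int)   # synthetic labels
      clf = LogisticRegression(max_iter=2000).fit(X, y)

      x = X[:1].copy()
      w, b = clf.coef_[0], clf.intercept_[0]
      score = x @ w + b                            # signed distance from the decision boundary

      # Smallest uniform per-pixel step (in the worst-case direction) that crosses the
      # boundary; attacks on deep CNNs use the loss gradient in the same way.
      eps = 1.05 * abs(score[0]) / np.abs(w).sum()
      x_adv = x - np.sign(score[0]) * eps * np.sign(w)

      print("before:", clf.predict(x)[0], " after:", clf.predict(x_adv)[0])
      print("per-pixel change:", round(eps, 3))    # small relative to the pixel scale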
  • "Global imaging system and software companies have access to diverse imaging data repositories. They are actively entering the cognitive marketplace, either alone (e.g., Philips with Illumeo) or in partnership with AI industry leaders (e.g., Agfa with IBM Watson) . Public-private partnerships in the United Kingdom (National Health Service, Cancer Research UK Imperial Centre and OPTIMAM, DeepMind Health, Google) and the United States (University of California San Francisco, Western Digital, NVIDIA) are compiling big digital mammography databases to train AI machines for accurate breast cancer screening.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • “The potential applications of AI to the field of medical imaging remain to be fully elucidated because the underlying computing technology continues to rapidly improve and to be tested in the clinical environment. One feature that is unique to this field of computer science and to AI in particular is the propensity for researchers from the public and private sectors to orally present and discuss their findings at scientific sessions well in advance of or in lieu of publishing full manuscripts in the peer-reviewed literature. Much of what is typically done to create a solid scientific evidence basis for the use of (and reimbursement for) a new medical technology is missing from this AI orthopraxy.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • “Soon, powerful third-wave AI technologies will seamlessly link NLP skills with vision tasks, greatly enhancing human understanding of information and images. Humans informed by intelligent machines will compute novel insights from diverse digital images in big data repositories. At some future uncertain time, data science and AI applications will enhance human understanding of the veracity of all things digital. Although this augmented future approaches, imperfect humans and machines remain purposefully and necessarily juxtaposed.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14

  • Artificial Intelligence: The Next Digital Frontier. McKinsey Global Institute (2017)
  • Hospitals also could improve their capacity utilization by employing AI solutions to optimize many ordinary business tasks. Virtual agents could automate routine patient interactions. Speech recognition software has been used in client services, where it has reduced the expense of processing patients by handling routine tasks such as scheduling appointments and registering people when they enter a hospital. Natural language processing can analyze journal articles and other documents and digest their contents for quick access by doctors. These kinds of applications can have a significant impact without needing to pass a regulatory review.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • We have found that if a sector was slow to adopt digital technologies, it tends to trail the pack in putting AI to use, too. Our report Digital America found that almost one-quarter of the nation’s hospitals and more than 40 percent of its office-based physicians have not yet adopted electronic health record systems. Even those that do have electronic record systems may not be sharing data seamlessly with the patient or with other providers; tests are repeated needlessly and patients are required to recount their medical histories over and over because these systems are not interoperable. Another MGI report, The age of analytics, found that the US health-care sector has realized only 10 to 20 percent of its opportunities to use advanced analytics and machine learning.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • Patients also can benefit directly from the rise of AI in health care. Standardized treatments do not work for every patient, given the complexity of each person’s history and genetic makeup, so researchers are using advanced analytics to personalize regimens. Decisions can be based on data analysis and patient monitoring with use of remote diagnostic devices. A startup called Turbine uses AI to design personalized cancer-treatment regimens. The technology models cell biology on the molecular level and seeks to identify the best drug to use for specific tumors. It can also identify complex biomarkers and search for combination therapies by performing millions of simulated experiments each day.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • Medical practices have taken small steps toward incorporating AI into patient management, introducing speech recognition and other language AI technologies to automate steps in the process. In the future, virtual assistants equipped with speech recognition, image recognition, and machine learning tools will be able to conduct consultations, make diagnoses, and even prescribe drugs. If these systems lack enough information to reach a conclusion, a virtual agent could order additional tests and schedule them with the patient. In rural areas, virtual agents will be able to conduct remote consultations. However, this scenario would require patients, providers, and regulators to become comfortable with fully automated diagnosis and prescriptions.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • Prediction Machines: The Simple Economics of Artificial Intelligence
    Agrawal A, Gans J, Goldfarb A
    Harvard Business Review Press 2018
  • AI, its developments, and its impact are not always obvious
    - How many foresaw that Steve Jobs’ introduction of the iPhone in 2007 would mean the beginning of the end for the “Yellow Cab” industry? Uber and Lyft rely on the iPhone.
    - Do you realize that Google is only 20 years old?
  • AI in Medicine: Diagnosis vs Prediction
    - If I read a CT and find a mass in the body of the pancreas that looks like a PDAC, am I making a prediction or a diagnosis?
    - This may help reduce the burden of proof for the FDA if it is a prediction system and not a diagnosis machine
  • Should we stop training Radiologists?
    “whether Radiologists have a future depends on whether they are best positioned to undertake these roles, if other specialists will replace them, or if new job classes will develop, such as a combined radiologist/pathologist (i.e., a role where the radiologist also analyzes biopsies, perhaps performed immediately after imaging).”
  • Should we stop training Radiologists?
    “Therefore five clear roles for humans in the use of medical imaging will remain, at least in the short and medium term: choosing the image, using real-time images in medical procedures, interpreting machine output, training machines on new technologies, and employing judgement that may lead to overriding the prediction machine’s recommendation, perhaps on information unavailable to the machine.”
  • “Consider Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” At the present, we overestimate the degree to which imaging diagnosis will be affected by machine learning in the present moment and we underestimate the role that radiologists have to play in the development and deployment of these technologies. However, given the inevitable, it is essential for radiologists to stay abreast of developments in the machine learning field.”
    Machine Learning in Radiology: Resistance Is Futile
    Larvie M et al.
    Radiology 2019; 00:1-2
    https://doi.org/10.1148/radiol.2018182312
  • "Machine learning technologies are now deeply embedded in our medical information systems. These methods will ultimately be pervasive in the digital realm of radiology. Resistance really is futile. But that’s okay: The best applications will address pressing clinical needs and improve radiology care. Radiologists are well situated both to contribute to this technological progress, as well as to benefit from machine learning applications in their work. Done well, this will lead to improved patient outcomes and large advances for radiology practice”.
    Machine Learning in Radiology: Resistance Is Futile
    Larvie M et al.
    Radiology 2019; 00:1-2
    https://doi.org/10.1148/radiol.2018182312
  • This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy.
    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
    Qihang Yu, Lingxi Xie, Yan Wang, Yuyin Zhou, Elliot K. Fishman, Alan L. Yuille
    AMVIX (in Press)
  • We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage.
  • We present the Recurrent Saliency Transformation Network, which enjoys three advantages.
    (i) Benefited by a (recurrent) global energy function, it is easier to generalize our models from training data to testing data.
    (ii) With joint optimization over two networks, both of them get improved individually.
    (iii) By incorporating multi-stage visual cues, more accurate segmentation results are obtained. As the fine stage is less likely to be confused by the lack of contexts, we also observe better convergence during iterations.
  • Despite its effectiveness, this algorithm dealt with the two stages individually, which lacked optimization of a global energy function and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfactory convergence across iterations, and the fine stage sometimes produced even lower segmentation accuracy than the coarse stage.
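  • A minimal sketch, in NumPy, of the general idea behind the saliency transformation module described above: the previous iteration's probability map is turned into spatial weights that re-weight the input for the next pass. The transformation here is a fixed power and the "segmenter" is a toy function; in the paper both are learned convolutional networks optimized jointly.

      import numpy as np

      def saliency_transform(prob_map, sharpen=2.0):
          # Stand-in for the learned saliency transformation: probabilities -> spatial weights
          return prob_map ** sharpen

      def recurrent_segmentation(image, segment_fn, n_iters=3):
          prob_map = np.ones_like(image)                  # start from uniform weights
          for _ in range(n_iters):
              weights = saliency_transform(prob_map)
              prob_map = segment_fn(image * weights)      # current-iteration prediction
          return prob_map

      # Toy stand-in for a trained segmentation network
      toy_segmenter = lambda x: 1.0 / (1.0 + np.exp(-(x - x.mean())))
      image = np.random.default_rng(0).normal(size=(32, 32))
      final_map = recurrent_segmentation(image, toy_segmenter)
      print(final_map.shape)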
  • “In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to effectively and efficiently tackle these challenges.”
    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation
    Zhu Z, Xia Y, Shen W, Fishman EK, Yuille A
    2018 International Conference on 3D Vision (3DV)
    Page(s):682–690
    DOI: 10.1109/3DV.2018.00083
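  • The coarse-to-fine mechanic itself is simple to express. A minimal NumPy sketch on a toy volume (the margin, sizes, and the idea of cropping to the coarse mask's bounding box are illustrative assumptions; the networks themselves are omitted):

      import numpy as np

      def crop_from_coarse(volume, coarse_mask, margin=8):
          # Bounding box of the coarse-stage prediction, padded by a safety margin
          idx = np.argwhere(coarse_mask > 0.5)
          lo = np.maximum(idx.min(axis=0) - margin, 0)
          hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
          region = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
          return volume[region], region

      # Toy 64^3 "CT volume" with a small organ found by the coarse stage
      volume = np.random.default_rng(0).normal(size=(64, 64, 64))
      coarse_mask = np.zeros_like(volume)
      coarse_mask[30:38, 25:33, 40:48] = 1.0
      crop, region = crop_from_coarse(volume, coarse_mask)
      print(crop.shape)   # the fine-stage network segments only this smaller region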
  • “ The proposed 3D-based framework outperforms the 2D counterpart to a large margin since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications.”
    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation
    Zhu Z, Xia Y, Shen W, Fishman EK, Yuille A
    2018 International Conference on 3D Vision (3DV)
    Page(s):682–690
    DOI: 10.1109/3DV.2018.00083
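  • The Dice-Sørensen Coefficient used as the benchmark above is 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A short Python sketch with a toy 2D example (the paper evaluates 3D volumes, but the formula is identical):

      import numpy as np

      def dice_coefficient(pred, truth, eps=1e-7):
          # 2 * |A ∩ B| / (|A| + |B|); eps guards against empty masks
          pred, truth = pred.astype(bool), truth.astype(bool)
          intersection = np.logical_and(pred, truth).sum()
          return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

      a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True   # predicted mask
      b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True   # ground-truth mask
      print(round(dice_coefficient(a, b), 2))                  # 0.64 for this overlap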
  • "Machine learning is a method of data science that provides computers with the ability to learn without being programmed with explicit rules. Machine learning enables the creation of algorithms that can learn and make predictions. In contrast to rules-based algorithms, machine learning takes advantage of increased exposure to large and new data sets and has the ability to improve and learn with experience.” 


    Current Applications and Future Impact of Machine Learning in Radiology 
Garry Choy et al.
 Radiology 2018; 00:1–11
  • “Machine learning tasks are typically classified into three broad categories, depending on the type of task: supervised, unsupervised, and reinforcement learning.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “In supervised learning, data labels are provided to the algorithm in the training phase (there is supervision in training). The expected outputs are usually labeled by human experts and serve as ground truth for the algorithm. The goal of the algorithm is usually to learn a general rule that maps inputs to outputs. In machine learning, ground truth refers to the data assumed to be true. In unsupervised learning, no data labels are given to the learning algorithm.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “In unsupervised learning, no data labels are given to the learning algorithm. The goal of the machine learning task is to find the hidden structure in the data and to separate data into clusters or groups. In reinforcement learning, a computer program performs a certain task in a dynamic environment in which it receives feedback in terms of positive and negative reinforcement (such as playing a game against an opponent). Reinforcement learning is learning from the consequences of interactions with an environment without being explicitly taught.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
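  • A minimal illustration of the first two categories in Python with scikit-learn, using a small toy dataset rather than radiologic images (reinforcement learning is omitted because it requires an interactive environment):

      from sklearn.datasets import load_digits
      from sklearn.linear_model import LogisticRegression
      from sklearn.cluster import KMeans

      X, y = load_digits(return_X_y=True)        # small images of handwritten digits

      # Supervised: ground-truth labels y are provided during training
      clf = LogisticRegression(max_iter=2000).fit(X, y)

      # Unsupervised: no labels; the algorithm looks for hidden structure (clusters)
      clusters = KMeans(n_clusters=10, n_init=10).fit_predict(X)

      print(clf.score(X, y), clusters[:10])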
  • “Artificial neural networks are statistical and mathematical methods that are a subset of machine learning. These networks are inspired by the way biologic nervous systems process information with a large number of highly interconnected processing elements, which are called neurons, nodes, or cells. An artificial neural network is structured as one input layer of neurons, one or more “hidden layers,” and one output layer. Each hidden layer is made up of a set of neurons, where each neuron is fully connected to all neurons in the previous layer.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
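  • The layered, fully connected structure described above can be sketched in a few lines of NumPy (the weights are random here rather than trained, which is enough to show the input layer, hidden layers, and output layer wiring):

      import numpy as np

      rng = np.random.default_rng(0)

      def dense(x, n_out, activation=True):
          # One fully connected layer: every neuron sees every neuron of the previous layer
          W = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
          b = np.zeros(n_out)
          z = x @ W + b
          return np.maximum(z, 0.0) if activation else z

      x = rng.normal(size=(1, 64))                 # input layer: 64 features
      h1 = dense(x, 32)                            # hidden layer 1
      h2 = dense(h1, 16)                           # hidden layer 2
      output = dense(h2, 2, activation=False)      # output layer: 2 classes
      print(output)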
  • “For the foreseeable future, widespread application of machine learning algorithms in diagnostic radiology is not expected to reduce the need for radiologists. Instead, these techniques are expected to improve radiology workflow, increase radiologist productivity, and enhance patient care and satisfaction.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “Collection of high-quality ground truth data, development of generalizable and diagnostically accurate techniques, and workflow integration are key challenges for the creation and adoption of machine learning models in radiology practice.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “In general, machine learning techniques are developed by using a train-test system. Three primary sets of data for training, testing, and validation are ideally needed. The training data set is used to train the model. During training, the algorithm learns from examples. The validation set is used to evaluate different model fits on separate data and to tune the model parameters. Most training approaches tend to overfit the training data, meaning that they find relationships that fit the training data set well but do not hold in general. Therefore, successive iterations of training and validation may be performed to optimize the algorithm and avoid overfitting. In the testing set, after a machine learning algorithm is initially developed, the final model fit may then be applied to an independent testing data set to assess the performance, accuracy, and generalizability of the algorithm.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
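  • A minimal scikit-learn sketch of the train/validation/test discipline described above, on a toy tabular dataset (the split fractions are arbitrary choices for illustration):

      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      X, y = load_breast_cancer(return_X_y=True)

      # Hold out an independent test set first, then carve out a validation set for tuning
      X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
      X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=0)

      model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
      print("train:", model.score(X_train, y_train))   # typically highest (overfitting risk)
      print("val:  ", model.score(X_val, y_val))       # used to tune and compare models
      print("test: ", model.score(X_test, y_test))     # reported once, on untouched data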
  • “Fundamentally, machine learning is powerful because it is not “brittle.” A rules-based approach may break when exposed to the real world, because the real world often offers examples that are not captured within the rules that the programmer uses to define an algorithm. With machine learning, the system simply uses statistical approximation to respond most appropriately based on its training set, which means that it is flexible. Additionally, machine learning is a powerful tool because it is generic, that is, the same concepts are used for self-driving cars as are used for medical imaging interpretation. Generalizability of machine learning allows for rapid expansion in different fields, including medicine.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • There are a number of ways that the field of deep learning has been characterized. Deep learning is a class of machine learning algorithms that:
    - use a cascade of many layers of nonlinear processing units for feature extraction and transformation, where each successive layer uses the output from the previous layer as input; the algorithms may be supervised or unsupervised, and applications include pattern analysis (unsupervised) and classification (supervised);
    - are based on the (unsupervised) learning of multiple levels of features or representations of the data, with higher-level features derived from lower-level features to form a hierarchical representation;
    - are part of the broader machine learning field of learning representations of data;
    - learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
    Wikipedia
  • Deep learning algorithms are based on distributed representations. The underlying assumption behind distributed representations is that observed data are generated by the interactions of factors organized in layers. Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can be used to provide different amounts of abstraction.
 Wikipedia
  • Situational Awareness
    Situation awareness involves being aware of what is happening in the vicinity to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. One with an adept sense of situation awareness generally has a high degree of knowledge with respect to inputs and outputs of a system, an innate "feel" for situations, people, and events that play out because of variables the subject can control. Lacking or inadequate situation awareness has been identified as one of the primary factors in accidents attributed to human error. Thus, situation awareness is especially important in work domains where the information flow can be quite high and poor decisions may lead to serious consequences (such as piloting an airplane, functioning as a soldier, or treating critically ill or injured patients).
  • “For the biomedical image computing, machine learning, and bioinformatics scientists, the aforementioned challenges will present new and exciting opportunities for developing new feature analysis and machine learning opportunities. Clearly though, the image computing community will need to work closely with the pathology community and potentially whole slide imaging and microscopy vendors to be able to develop new and innovative solutions to many of the critical image analysis challenges in digital pathology.”
    Image analysis and machine learning in digital pathology: Challenges and opportunities
    Madabhushi A, Lee G
    Med Image Anal 2016 Oct;33:170–175
  • OBJECTIVE. The purposes of this article are to describe concepts that radiologists should understand to evaluate machine learning projects, including common algorithms, supervised as opposed to unsupervised techniques, statistical pitfalls, and data considerations for training and evaluation, and to briefly describe ethical dilemmas and legal risk.
    CONCLUSION. Machine learning includes a broad class of computer programs that improve with experience. The complexity of creating, training, and monitoring machine learning indicates that the success of the algorithms will require radiologist involvement for years to come, leading to engagement rather than replacement.
    Implementing Machine Learning in Radiology Practice and Research
    Kohli M et al.
    AJR 2017; 208:754–760
  • “ML comprises a broad class of statistical analysis algorithms that iteratively improve in response to training data to build models for autonomous predictions. In other words, computer program performance improves automatically with experience. The goal of an ML algorithm is to develop a mathematic model that fits the data. Once this model fits known data, it can be used to predict the labels of new data. Because radiology is inherently a data interpretation profession—extracting features from images and applying a large knowledge base to interpret those features—it provides ripe opportunities to apply these tools to improve practice.”
    Implementing Machine Learning in Radiology Practice and Research
    Kohli M et al.
    AJR 2017; 208:754–760
  • “Most ML relevant to radiology is supervised. In supervised ML, data are labeled before the model is trained. For example, in training a project to identify a specific brain tumor type, the label would be tumor pathologic results or genomic information. These labels, also known as ground truth, can be as specific or general as needed to answer the question. The ML algorithm is exposed to enough of these labeled data to allow them to morph into a model designed to answer the question of interest. Because of the large number of well-labeled images required to train models, curating these datasets is often laborious and expensive.”
    Implementing Machine Learning in Radiology Practice and Research
    Kohli M et al.
    AJR 2017; 208:754–760
  • “ML encompasses many powerful tools with the potential to dramatically increase the information radiologists extract from images. It is no exaggeration to suggest the tools will change radiology as dramatically as the advent of cross-sectional imaging did. We believe that owing to the narrow scope of existing applications of ML and the complexity of creating and training ML models, the possibility that radiologists will be replaced by machines is at best far in the future. Successful application of ML to the radiology domain will require that radiologists extend their knowledge of statistics and data science to supervise and correctly interpret ML-derived results.”
    Implementing Machine Learning in Radiology Practice and Research
    Kohli M et al.
    AJR 2017; 208:754–760
  • “This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on.”
    Deep Learning in Medical Image Analysis
    Shen D, Wu G, Suk HI
    Annu Rev Biomed Eng 2017 (in press)
  • Unlike in the fields of medicine and health, in the field of artificial intelligence and machine learning, the term validation often refers to the fine-tuning stage of model development, and another term, test, is used instead to mean the process of verifying model performance.
    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction
    Park SH, Han K
    Radiology 2018; 286:800–809
  • “Evaluation of the clinical performance of a diagnostic or predictive artificial intelligence model built with high-dimensional data requires use of external data from a clinical cohort that adequately represents the target patient population to avoid over-estimation of the results due to overfitting and spectrum bias.”
    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction
    Park SH, Han K
    Radiology 2018; 286:800–809
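  • A small Python sketch of why external validation matters: a model is fit on one synthetic "institution" and then scored on a second cohort drawn from a shifted distribution, a crude stand-in for a different scanner or patient spectrum. All data and the shift are invented for illustration; a markedly lower external AUC would point toward overfitting or spectrum bias.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)

      def cohort(n, shift=0.0):
          # Synthetic cohort; `shift` mimics a different case mix or acquisition protocol
          X = rng.normal(loc=shift, size=(n, 5))
          y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)
          return X, y

      X_int, y_int = cohort(500)              # internal (development) data
      X_ext, y_ext = cohort(500, shift=1.0)   # external clinical cohort

      model = LogisticRegression().fit(X_int, y_int)
      print("internal AUC:", round(roc_auc_score(y_int, model.predict_proba(X_int)[:, 1]), 3))
      print("external AUC:", round(roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]), 3))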
  • “The ultimate clinical verification of diagnostic or predictive artificial intelligence tools requires a demonstration of their value through effect on patient outcomes, beyond performance metrics; this can be achieved through clinical trials or well-designed observational outcome research.”
    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction
    Park SH, Han K
    Radiology 2018; 286:800–809
  • “Artificial intelligence is the branch of computer science devoted to creating systems to perform tasks that ordinarily require human intelligence. This is a broad umbrella term encompassing a wide variety of subfields and techniques; in this article, we focus on deep learning as a type of machine learning.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
  • “Machine learning is the subfield of artificial intelligence in which algorithms are trained to perform tasks by learning patterns from data rather than by explicit programming. In classic machine learning, expert humans discern and encode features that appear distinctive in the data, and statistical techniques are used to organize or segregate the data on the basis of these features.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
  • “AI using deep learning demonstrates promise for detecting critical findings at noncontrast-enhanced head CT. A dedicated algorithm was required to detect SAI. Detection of SAI showed lower sensitivity in comparison to detection of HMH, but showed reasonable performance. Findings support further investigation of the algorithm in a controlled and prospective clinical setting to determine whether it can independently screen noncontrast-enhanced head CT examinations and notify the interpreting radiologist of critical findings.”
    Automated Critical Test Findings Identification and Online Notification System Using Artificial Intelligence in Imaging
    Prevedello LM et al.
    Radiology (in press)
  • “To evaluate the performance of an artificial intelligence (AI) tool using a deep learning algorithm for detecting hemorrhage, mass effect, or hydrocephalus (HMH) at non–contrast material–enhanced head computed tomographic (CT) examinations and to determine algorithm performance for detection of suspected acute infarct (SAI).”
    Automated Critical Test Findings Identification and Online Notification System Using Artificial Intelligence in Imaging
    Prevedello LM et al.
    Radiology (in press)
  • “Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data.”
    Toolkits and Libraries for Deep Learning
    Bradley J. Erickson et al.
    J Digit Imaging 2017; 30:400–405
  • “Even more exciting is the finding that in some cases, computers seem to be able to “see” patterns that are beyond human perception. This discovery has led to substantial and increased interest in the field of machine learning—specifically, how it might be applied to medical images.”
    Machine Learning for Medical Imaging
    Bradley J. Erickson et al.
    RadioGraphics 2017 (in press)
  • “These algorithms have been used for several challenging tasks, such as pulmonary embolism segmentation with computed tomographic (CT) angiography (3,4), polyp detection with virtual colonoscopy or CT in the setting of colon cancer (5,6), breast cancer detection and diagnosis with mammography (7), brain tumor segmentation with magnetic resonance (MR) imaging (8), and detection of the cognitive state of the brain with functional MR imaging to diagnose neurologic disease (eg, Alzheimer disease).”
    Machine Learning for Medical Imaging
    Bradley J. Erickson et al.
    RadioGraphics 2017 (in press)
  • “If the algorithm system optimizes its parameters such that its performance improves—that is, more test cases are diagnosed correctly—then it is considered to be learning that task.”
    Machine Learning for Medical Imaging
    Bradley J. Erickson et al.
    RadioGraphics 2017 (in press)
  • “Training: The phase during which the machine learning algorithm system is given labeled example data with the answers (ie, labels)—for example, the tumor type or correct boundary of a lesion. The set of weights or decision points for the model is updated until no substantial improvement in performance is achieved.”
    Machine Learning for Medical Imaging
    Bradley J. Erickson et al.
    RadioGraphics 2017 (in press)
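  • The "update until no substantial improvement" loop can be sketched with plain gradient descent on a toy regression problem (the data, learning rate, and tolerance are arbitrary illustrations, not anything from the article):

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      w_true = np.array([1.5, -2.0, 0.5])
      y = X @ w_true + rng.normal(scale=0.1, size=200)   # labeled examples ("answers")

      w = np.zeros(3)                       # the model's weights, to be learned
      prev_loss, lr, tol = np.inf, 0.05, 1e-6
      for step in range(10_000):
          err = X @ w - y
          loss = np.mean(err ** 2)
          if prev_loss - loss < tol:        # stop: no substantial improvement
              break
          prev_loss = loss
          w -= lr * 2 * X.T @ err / len(y)  # gradient step updates the weights
      print(step, round(loss, 5), w.round(2))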
  • “Deep learning, also known as deep neural network learning, is a new and popular area of research that is yielding impressive results and growing fast. Early neural networks were typically only a few (<5) layers deep, largely because the computing power was not sufficient for more layers and owing to challenges in updating the weights properly. Deep learning refers to the use of neural networks with many layers—typically more than 20.”
    Machine Learning for Medical Imaging
    Bradley J. Erickson et al.
    RadioGraphics 2017 (in press)
  • “CNNs are similar to regular neural networks. The difference is that CNNs assume that the inputs have a geometric relationship—like the rows and columns of images. The input layer of a CNN has neurons arranged to produce a convolution of a small image (ie, kernel) with the image. This kernel is then moved across the image, and its output at each location as it moves across the input image creates an output value. Although CNNs are so named because of the convolution kernels, there are other important layer types that they share with other deep neural networks. Kernels that detect important features (eg, edges and arcs) will have large outputs that contribute to the final object to be detected.”
    Machine Learning for Medical Imaging
    Bradley J. Erickson et al.
    RadioGraphics 2017 (in press)
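  • A tiny Python example of the convolution operation described above: a 3 x 3 edge-detecting kernel is slid across a synthetic image, and the resulting feature map is large exactly where the edges are (in a CNN the kernel values would be learned rather than hand-chosen):

      import numpy as np
      from scipy.signal import convolve2d

      image = np.zeros((8, 8))
      image[2:6, 2:6] = 1.0                 # a bright square on a dark background

      kernel = np.array([[-1, -1, -1],
                         [-1,  8, -1],
                         [-1, -1, -1]], dtype=float)   # responds strongly at edges

      feature_map = convolve2d(image, kernel, mode="same")
      print(feature_map.round(1))           # large values trace the square's border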
  • “Machine learning is already being applied in the practice of radiology, and these applications will probably grow at a rapid pace in the near future. The use of machine learning in radiology has important implications for the practice of medicine, and it is important that we engage this area of research to ensure that the best care is afforded to patients. Understanding the properties of machine learning tools is critical to ensuring that they are applied in the safest and most effective manner.”
    Machine Learning for Medical Imaging
    Bradley J. Erickson et al.
    RadioGraphics 2017 (in press)
© 1999-2019 Elliot K. Fishman, MD, FACR. All rights reserved.