Deep Learning Pearls

  

  • OBJECTIVE. Although extensive attention has been focused on the enormous potential of artificial intelligence (AI) technology, a major question remains: how should this fundamentally new technology be regulated? The purpose of this article is to provide an overview of the pathways developed by the U.S. Food and Drug Administration to regulate the incorporation of AI in medical imaging.
    CONCLUSION. AI is the new wave of innovation in health care. The technology holds promising applications to revolutionize all aspects of medicine.
    Concepts in U.S. Food and Drug Administration Regulation of Artificial Intelligence for Medical Imaging
    Kohli A et al.
    AJR 2019; 213:886–888
  • “Geoffrey Hinton (Toronto) said: ‘If you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff, but hasn’t yet looked down so doesn’t realise there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within 5 years, deep learning is going to do better than radiologists. We’ve got plenty of radiologists already.’”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
  • “Deep learning is a subset of machine learning and is the basis of most AI tools for image interpretation. Deep learning means that the computer has multiple layers of algorithms interconnected and stratified into hierarchies of importance (like more or less meaningful data). These layers accumulate data from inputs and provide an output that can change step by step once the AI system learns new features from the data. Such multi-layered algorithms form large artificial neural networks.”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
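  • The ESR description above maps directly onto the forward pass of a small multilayer network: each layer takes the previous layer’s output and passes a transformed version forward, so later layers operate on progressively more abstract representations. A minimal, framework-free Python sketch (untrained random weights, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Three stacked layers: flattened image patch -> two hidden layers -> 2-class scores.
    layer_sizes = [4096, 512, 64, 2]
    weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        """Propagate an input through the layer hierarchy."""
        for i, (W, b) in enumerate(zip(weights, biases)):
            x = x @ W + b
            if i < len(weights) - 1:
                x = np.maximum(x, 0.0)   # ReLU on hidden layers only
        return x

    patch = rng.random(4096)             # stand-in for a flattened image patch
    print(forward(patch))                # raw scores from the final layer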
  • “The United States is the global leader in AI radiology publication productivity, accounting for almost half of total radiology AI output. Other countries have increased their productivity. Notably, China has increased its productivity exponentially to close to 20% of all AI publications. The top three most productive radiology subspecialties were neuroradiology, body and chest, and nuclear medicine.”
    Global Trend in Artificial Intelligence–Based Publications in Radiology From 2000 to 2018
    West E et al.
    AJR 2019; 213:1–3

  • “Of note, China has increased its productivity exponentially, from less than 5% to close to 20% of all AI publications. China’s ability to exponentially increase productivity is likely due to the country’s unique research infrastructure. The availability of large centralized data and rapid implementation across commercial industries have already helped the nation become very productive in AI research in a short period. In addition, Chinese government directives and funding for the advancement of AI have generated an incredible mobilization.”
    Global Trend in Artificial Intelligence–Based Publications in Radiology From 2000 to 2018
    West E et al.
    AJR 2019; 213:1–3
  • “Exponential growth in AI radiology research has occurred worldwide, with the United States leading overall AI research productivity. China has made the second biggest contribution, largely driven by unique research infrastructure ideal for AI research and significant government funding support. The future success of the United States will depend on continued government funding and prioritization of AI radiology research within the research community.”
    Global Trend in Artificial Intelligence–Based Publications in Radiology From 2000 to 2018
    West E et al.
    AJR 2019; 213:1–3
  • AI and the Future
  • Last December, the developers of AlphaZero published their explanation of the process by which the program mastered chess—a process, it turns out, that ignored human chess strategies developed over centuries and classic games from the past. Having been taught the rules of the game, AlphaZero trained itself entirely by self-play and, in less than 24 hours, became the best chess player in the world—better than grand masters and, until then, the most sophisticated chess-playing computer program in the world. It did so by playing like neither a grand master nor a preexisting program. It conceived and executed moves that both humans and human-trained machines found counterintuitive, if not simply wrong. The founder of the company that created AlphaZero called its performance “chess from another dimension” and proof that sophisticated AI “is no longer constrained by the limits of human knowledge.”
    The Metamorphosis
    Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
    The Atlantic August 2019
  • “Google Home and Amazon’s Alexa are digital assistants already installed in millions of homes and designed for daily conversation: They answer queries and offer advice that, especially to children, may seem intelligent, even wise. And they can become a solution to the abiding loneliness of the elderly, many of whom interact with these devices as friends. The more data AI gathers and analyzes, the more precise it becomes, so devices such as these will learn their owners’ preferences and take them into account in shaping their answers. And as they get “smarter,” they will become more intimate companions. As a result, AI could induce humans to feel toward it emotions it is incapable of reciprocating.”
    The Metamorphosis
    Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
    The Atlantic August 2019
  • The three of us differ in the extent to which we are optimists about AI. But we agree that it is changing human knowledge, perception, and reality—and, in so doing, changing the course of human history. We seek to understand it and its consequences, and encourage others across disciplines to do the same.
    The Metamorphosis
    Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
    The Atlantic August 2019
  • “Distinguishing between “data-driven” and “AI-driven” isn’t just semantics. Each term reflects different assets, the former focusing on data and the latter on processing ability. Data holds the insights that can enable better decisions; processing is the way to extract those insights and take actions. Humans and AI are both processors, with very different abilities. To understand how best to leverage each, it’s helpful to review our own biological evolution and how decision-making has evolved in industry.”
    What AI-Driven Decision Making Looks Like
    Eric Colson
    Harvard Business Review July 2019
  • In response to this new data-rich environment, we’ve adapted our workflows. IT departments support the flow of information using machines (databases, distributed file systems, and the like) to reduce the unmanageable volumes of data down to digestible summaries for human consumption. The summaries are then further processed by humans using tools like spreadsheets, dashboards, and analytics applications. Eventually, the highly processed, and now manageably small, data is presented for decision-making. This is the “data-driven” workflow. Human judgment is still the central processor, but now it uses summarized data as a new input.
    What AI-Driven Decision Making Looks Like
    Eric Colson
    Harvard Business Review July 2019
  • “We need to evolve further, and bring AI into the workflow as a primary processor of data. For routine decisions that rely only on structured data, we’re better off delegating decisions to AI. AI is less prone to human cognitive bias. (There is a very real risk of using biased data that may cause AI to find specious relationships that are unfair. Be sure to understand how the data is generated in addition to how it is used.) AI can be trained to find segments in the population that best explain variance at fine-grain levels even if they are unintuitive to our human perceptions. AI has no problem dealing with thousands or even millions of groupings. And AI is more than comfortable working with nonlinear relationships, be they exponential, power laws, geometric series, binomial distributions, or otherwise.”
    What AI-Driven Decision Making Looks Like
    Eric Colson
    Harvard Business Review July 2019
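  • The “nonlinear relationships” and fine-grained segments described above are exactly what tree-based learners can pick up without being told their functional form. A small, hedged Python sketch on synthetic data (variable names and values are illustrative only, not from the article):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(42)
    n = 5000
    usage = rng.uniform(0, 3, n)               # a continuous predictor
    segment = rng.integers(0, 200, n)          # one of many fine-grained groupings

    # Outcome combines an exponential effect with a segment-specific interaction.
    y = np.exp(usage) + 0.5 * usage * (segment % 7) + rng.normal(0, 1.0, n)

    X = np.column_stack([usage, segment])
    model = GradientBoostingRegressor().fit(X, y)
    print("training R^2:", round(model.score(X, y), 3))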
  • “The key is that humans are not interfacing directly with data but rather with the possibilities produced by AI’s processing of the data. Values, strategy, and culture are our way to reconcile our decisions with objective rationality. This is best done explicitly and fully informed. By leveraging both AI and humans, we can make better decisions than using either one alone.”
    What AI-Driven Decision Making Looks Like
    Eric Colson
    Harvard Business Review July 2019
  • “This evolution is unlikely to occur within the individual organization, just as evolution by natural selection does not take place within individuals. Rather, it’s a selection process that operates on a population. The more efficient organizations will survive at a higher rate. Since it’s hard for mature companies to adapt to changes in the environment, I suspect we’ll see the emergence of new companies that embrace both AI and human contributions from the beginning and build them natively into their workflows.”
    What AI-Driven Decision Making Looks Like
    Eric Colson
    Harvard Business Review July 2019
  • “In radiology, for instance, some algorithms have performed image-based diagnosis as well as or better than human experts. Yet it’s unclear if patients and medical institutions will trust AI to automate that job entirely. A University of California at San Diego pilot in which AI successfully diagnosed childhood diseases more accurately than junior-level pediatricians still required senior doctors to personally review and sign off on the diagnosis. The real aim is always going to be to use AI to collaborate with clinicians seeking higher precision — not try to replace them.”
    The Health Care Benefits of Combining Wearables and AI
    Moni Miyashita and Michael Brady
    Harvard Business Review May 2019
  • “Despite broad awareness of these trends, medical education continues to be largely information based, as if physicians are still the only source of medical knowledge. The reality of this web-enabled era is different. Patients readily garner more information, both correct and incorrect, to bring to clinical encounters and expect meaningful discussions with their physicians. These expectations challenge physicians not only to keep current but also to be able to communicate options to patients in a language that speaks meaningfully to their individual concerns and preferences.”
    Reimagining Medical Education in the Age of AI
    Steven A. Wartman, and C. Donald Combs
    AMA Journal of Ethics February 2019, Volume 21, Number 2: E146-152
  • In addition, the skills required of practicing physicians will increasingly involve facility in collaborating with and managing artificial intelligence (AI) applications that aggregate vast amounts of data, generate diagnostic and treatment recommendations, and assign confidence ratings to those recommendations. The ability to correctly interpret probabilities requires mathematical sophistication in stochastic processes, something current medical curricula address inadequately. In part, the need for more sophisticated mathematical understanding is driven by the analytics of precision and personalized medicine, which rely on AI to predict which treatment will work for a particular disease in a particular subgroup of patients.
    Reimagining Medical Education in the Age of AI
    Steven A. Wartman, and C. Donald Combs
    AMA Journal of Ethics February 2019, Volume 21, Number 2: E146-152
  • “As we pointed out earlier, the increasing incongruence between the organizing and retention capacities of the human mind and medicine’s growing complexity should compel significant re-engineering of medical school curricula. Curricula should shift from a focus on information acquisition to an emphasis on knowledge management and communication. Nothing manifests this need for change better than the observation that every patient is becoming a big data challenge. For clinicians, the need to understand probabilities—such as confidence ratings for diagnostic or therapeutic recommendations generated by an AI clinical decision support system—will likely increase as personalized medicine continues to enlarge its role in practice.”
    Reimagining Medical Education in the Age of AI
    Steven A. Wartman, and C. Donald Combs
    AMA Journal of Ethics February 2019, Volume 21, Number 2: E146-152
  • Accordingly, we advocate new curricula that respond to the challenges of AI while being less detrimental to learners’ mental health. These curricula should emphasize 4 major features: Knowledge capture, not knowledge retention; Collaboration with and management of AI applications; A better understanding of probabilities and how to apply them meaningfully in clinical decision making with patients and families; and The cultivation of empathy and compassion.
    Reimagining Medical Education in the Age of AI
    Steven A. Wartman, and C. Donald Combs
    AMA Journal of Ethics February 2019, Volume 21, Number 2: E146-152
  • The Role of AI in the Diagnosis and Management of PDAC (2025)
    - Early detection of pancreatic cancer (FELIX)
    - Define the best management plan for the patient and the treatment sequence (Surgery, Chemotherapy, Immunotherapy, Radiation Therapy)
    - Predict ultimate survival for the patient based on a variable set of parameters
  • “Not surprisingly, though, as AI supercharges business and society, CEOs are under the spotlight to ensure their company’s responsible use of AI systems beyond complying with the spirit and letter of applicable laws. Ethical debates are well underway about what’s “right” and “wrong” when it comes to high-stakes AI applications such as autonomous weapons and surveillance systems. And there’s an outpouring of concern and skepticism regarding how we can imbue AI systems with human ethical judgment, when moral values frequently vary by culture and can be difficult to code in software.”
    Leading your organization to responsible AI
    Roger Burkhardt, Nicolas Hohn, and Chris Wigley
    McKinsey & Company (May 2019)
  • “AI development always involves trade-offs. For instance, when it comes to model development, there is often a perceived trade-off between the accuracy of an algorithm and the transparency of its decision making, or how easily predictions can be explained to stakeholders. Too great a focus on accuracy can lead to the creation of “black box” algorithms in which no one can say for certain why an AI system made the recommendation it did. Likewise, the more data that models can analyze, the more accurate the predictions, but also, often, the greater the privacy concerns.”
    Leading your organization to responsible AI
    Roger Burkhardt, Nicolas Hohn, and Chris Wigley
    McKinsey & Company (May 2019)
  • “Data serve as the fuel for AI. In general, the more data used to train systems, the more accurate and insightful the predictions. However, pressure on analytics teams to innovate can lead to the use of third-party data or the repurposing of existing customer data in ways that, while not yet covered by regulations, are considered inappropriate by consumers. For example, a healthcare provider might buy data about its patients—such as what restaurants they frequent or how much TV they watch—from data brokers to help doctors better assess each patient’s health risk.”
    Leading your organization to responsible AI
    Roger Burkhardt, Nicolas Hohn, and Chris Wigley
    McKinsey & Company (May 2019)
  • The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and the potential for reducing medical errors; and for patients, by enabling them to process their own data to promote health. The current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications will be discussed in this article. Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen.
    High-performance medicine: the convergence of human and artificial intelligence
    Eric J. Topol
    NATURE MEDICINE | VOL 25 | January 2019 | 44–56 |
  • The second is the generation of data in massive quantities, from sources such as high-resolution medical imaging, biosensors with continuous output of physiologic metrics, genome sequencing, and electronic medical records. The limits on analysis of such data by humans alone have clearly been exceeded, necessitating an increased reliance on machines. Accordingly, at the same time that there is more dependence than ever on humans to provide healthcare, algorithms are desperately needed to help.
    High-performance medicine: the convergence of human and artificial intelligence
    Eric J. Topol
    NATURE MEDICINE | VOL 25 | January 2019 | 44–56 |
  • “Similarly, DNNs have been applied across a wide variety of medical scans, including bone films for fractures and estimation of aging, classification of tuberculosis, and vertebral compression fractures; computed tomography scans for lung nodule, liver masses, pancreatic cancer, and coronary calcium score; brain scans for evidence of hemorrhage, head trauma, and acute referrals; magnetic resonance imaging; echocardiograms; and mammographies.”
    High-performance medicine: the convergence of human and artificial intelligence
    Eric J. Topol
    NATURE MEDICINE | VOL 25 | January 2019 | 44–56 |
  • “Furthermore, the lack of large datasets of carefully annotated images has been limiting across various disciplines in medicine. Ironically, to compensate for this deficiency, generative adversarial networks have been used to synthetically produce large image datasets at high resolution, including mammograms, skin lesions, echocardiograms, and brain and retina scans, that could be used to help train DNNs.”
    High-performance medicine: the convergence of human and artificial intelligence
    Eric J. Topol
    NATURE MEDICINE | VOL 25 | January 2019 | 44–56 |
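  • A generative adversarial network, mentioned above as a way to synthesize training images, pairs a generator (noise → image) with a discriminator (image → real/fake score) and trains them against each other. A heavily simplified PyTorch sketch on random placeholder “images” — the loop structure only, not a clinically usable model:

    import torch
    import torch.nn as nn

    latent_dim, img_pixels = 64, 28 * 28

    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, img_pixels), nn.Tanh(),        # fake image scaled to [-1, 1]
    )
    discriminator = nn.Sequential(
        nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),                            # real-vs-fake logit
    )

    loss_fn = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    real_images = torch.rand(512, img_pixels) * 2 - 1  # placeholder for a real dataset

    for step in range(100):
        real = real_images[torch.randint(0, 512, (32,))]
        fake = generator(torch.randn(32, latent_dim))

        # Discriminator: push real toward 1, generated toward 0.
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: make the discriminator label its output as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    synthetic_batch = generator(torch.randn(16, latent_dim))  # candidate augmentation images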



  • Harvard Business Review Jan-Feb 2018
  • In 2013 the MD Anderson Cancer Center launched a “moon shot” project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system. But in 2017, the project was put on hold after costs topped $62 million—and the system had yet to be used on patients.
  • Determining the use cases
    The second area of assessment evaluates the use cases in which cognitive applications would generate substantial value and contribute to business success. Start by asking key questions such as: How critical to your overall strategy is addressing the targeted problem? How difficult would it be to implement the proposed AI solution—both technically and organizationally? Would the benefits from launching the application be worth the effort? Next, prioritize the use cases according to which offer the most short- and long-term value, and which might ultimately be integrated into a broader platform or suite of cognitive capabilities to create competitive advantage.
  • Artificial Intelligence (AI) in Practice: Applications
    - Congestive heart failure
    - Alzheimer's disease
    - Pneumonia
    - Lung nodule evaluation
    - Wrist fractures
    - Pancreatic cancer
  • Artificial Intelligence (AI) in Practice: Applications
    - Plain x-ray
    - Ultrasound
    - CT
    - MRI
    - PET/CT
  • “Although elegant, Lakhani and Sundaram have a software result, not a hardware result. In most software research, the only individuals with the algorithm are the researchers. Without the AI algorithm, the results cannot be reproduced. Many AI publications are transient—they are proof-of-concept; they cannot be validated. As a radiologist, you cannot implement the AI research in your clinical practice without the algorithm, and the algorithms are largely discarded. In this setting, there is near zero chance that practice guidelines will be changed.”
    Editor’s Note: Publication of AI Research in Radiology
    Bluemke DA
    Radiology 2018 (in press)
    https://doi.org/10.1148/radiol.2018184021
  • New AI research in radiology is amazing. Our discipline has tried for 30 or more years for computers to help us analyze our images. Prior non-AI approaches have mostly not succeeded. In my research lab, technologists and pre- and postdoctoral students analyzed thousands of cardiac MRI cases by drawing circles at the borders of the heart for the last 20 years. Yet in 6 months or less, AI neural networks are now trained to draw those circles better and more consistently than any of our prior efforts. My reaction to seeing new AI developments is equivalent to “shock and awe.”
    Editor’s Note: Publication of AI Research in Radiology
    Bluemke DA
    Radiology 2018 (in press)
    https://doi.org/10.1148/radiol.2018184021
  • “Our first policy affecting AI research is regarding pre-print servers, such as arXiv.org. AI researchers frequently put their latest algorithms on arXiv to claim "I’m first" supremacy. arXiv publications are not peer reviewed. They do however look like normal publications—especially to laypersons. Preprint servers are used by AI researchers to rapidly share software, algorithms, and ideas.”
    Editor’s Note: Publication of AI Research in Radiology
    Bluemke DA
    Radiology 2018 (in press)
    https://doi.org/10.1148/radiol.2018184021
  • The policy of Radiology is to discourage authors from placing their results on preprint servers. There are two reasons for this. First, if the results are already available, the incremental benefit of publication in Radiology is low. Second, the vast majority of submissions for publication undergo substantial changes due to peer review and editorial processes.
    Editor’s Note: Publication of AI Research in Radiology
    Bluemke DA
    Radiology 2018 (in press)
    https://doi.org/10.1148/radiol.2018184021 
  • “Our second policy affecting AI research is to strongly encourage making the computer algorithms available to other researchers. Authors of AI research should make a git archive of their source code or make it available on the author’s web page. Git archive providers such as GitHub, Bitbucket, or SourceForge are already available and in use by some researchers. Authors should place a link to the web page for their code in their Materials and Methods section. They should also provide a unique identifier for the revision of the code used in the publication.”
    Editor’s Note: Publication of AI Research in Radiology
    Bluemke DA
    Radiology 2018 (in press)
    https://doi.org/10.1148/radiol.2018184021
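  • One lightweight way to satisfy the “unique identifier for the revision of the code” suggestion is to capture the git commit hash of the analysis repository and store it alongside the results. A small Python sketch (assumes it is run from inside a git working copy; the output file name is arbitrary):

    import json
    import subprocess

    def current_git_commit() -> str:
        """Return the SHA of the commit checked out in the current repository."""
        return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

    # Store the revision next to the results so the exact code version can be cited.
    with open("run_metadata.json", "w") as f:
        json.dump({"code_revision": current_git_commit()}, f, indent=2)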
  • When AI truly succeeds in medical imaging, we will stop calling it AI. The AI portions will simply be integrated tools in our PACS, scanner, or workstation—not separate features.
    Editor’s Note: Publication of AI Research in Radiology
    Bluemke DA
    Radiology 2018 (in press)
    https://doi.org/10.1148/radiol.2018184021
  • "Artificial neural networks are inspired by the ability of brains to learn complicated patterns in data by changing the strengths of synaptic connections between neurons. Deep learning uses deep networks with many intermediate layers of artificial "neurons" between the input and the output, and, like the visual cortex, these artificial neurons learn a hierarchy of progressively more complex feature detectors. By learning feature detectors that are optimized for classification, deep learning can substantially outperform systems that rely on features supplied by domain experts or that are designed by hand."
    Deep Learning—A Technology With the Potential to Transform Health Care
    Geoffrey Hinton
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11100
  • "Understandably, clinicians, scientists, patients, and regulators would all prefer to have a simple explanation for how a neural net arrives at its classification of a particular case. In the example of predicting whether a patient has a disease, they would like to know what hidden factors the network is using. However, when a deep neural network is trained to make predictions on a big data set, it typically uses its layers of learned, nonlinear features to model a huge number of complicated but weak regularities in the data. It is generally infeasible to interpret these features because their meaning depends on complex interactions with uninterpreted features in other layers."
    Deep Learning—A Technology With the Potential to Transform Health Care
    Geoffrey Hinton
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11100
  • As data sets get bigger and computers become more powerful, the results achieved by deep learning will get better, even with no improvement in the basic learning techniques, although these techniques are being improved. The neural networks in the human brain learn from fewer data and develop a deeper, more abstract understanding of the world. In contrast to machine-learning algorithms that rely on provision of large amounts of labeled data, human cognition can find structure in unlabeled data, a process commonly termed unsupervised learning.
    Deep Learning—A Technology With the Potential to Transform Health Care
    Geoffrey Hinton
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11100
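  • “Unsupervised learning” in the passage above means finding structure in data that carry no labels. A minimal scikit-learn sketch that recovers hidden groupings from unlabeled points (synthetic data, purely illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Points drawn from three hidden groups; the algorithm never sees group labels.
    data = np.vstack([
        rng.normal(loc=center, scale=0.5, size=(100, 2))
        for center in ([0, 0], [5, 5], [0, 5])
    ])

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
    print(kmeans.cluster_centers_)   # group centers discovered without any labels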
  • "The creation of a smorgasbord of complex feature detectors based on unlabeled data appears to set the stage for humans to learn a classifier from only a small amount of labeled data. How the brain does this is still a mystery, but will not remain so. As new unsupervised learning algorithms are discovered, the data efficiency of deep learning will be greatly augmented in the years ahead, and its potential applications in health care and other fields will increase rapidly."
    Deep Learning—A Technology With the Potential to Transform Health Care
    Geoffrey Hinton
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11100
  • "In 1976, Maxmen predicted that artificial intelligence (AI) in the 21st century would usher in "the post-physician era," with health care provided by paramedics and computers. Today, the mass extinction of physicians remains unlikely. However, as outlined by Hinton2 in a related Viewpoint, the emergence of a radically different approach to AI, called deep learning, has the potential to effect major changes in clinical medicine and health care delivery."
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • "Deep learning had intuitive appeal for health- related applications, given its demonstrable strengths in intricate pattern recognition and predictive model building from big high-dimensional data sets. These analytic capabilities have already proven useful for basic and applied researchers, ranging across health disciplines. Thus far, clinical application of deep learning has been most rapid in image-intensive fields such as radiology, radiotherapy, pathology, ophthalmology, dermatology, and image-guided surgery."
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • "In many cases, interpretation of images by deep learning systems has outperformed that by individual clinicians when measured against a consensus of expert readers or gold standards such as pathologic findings. Clinically relevant applications have widened beyond image processing to include risk stratification for a broad range of patient populations (eBox in the Supplement), and health care organizations are capitalizing on deep learning and other machine-learning tools to improve logistics, quality management, and financial oversight. "
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • "Digital imaging in all its forms is becoming more powerful and more integral to medicine and health care. Unlike deep learning, expert human interpretation fails to capitalize on all the patterns, or "regularities," that can be extracted from very large data sets and used for interpretation of still and moving images. Deep learning and related machine- learning methods can also learn from massively greater numbers of images than any human expert, continue learning and adapting over time, mitigate interobserver variability, and facilitate better decision making and more effective image-guided therapy."
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • Deep learning shows promise for streamlining routine work by health care professionals and empowering patients, thereby promoting a safer, more humane, and participatory paradigm for health care. Different sources offer varying estimates of the amount of time wasted by health care professionals on tasks amenable to some automation (eg, high-quality image screening) that could then be rededicated to more or better care. A growing number of research studies also suggest specific possibilities for reduction in errors and improved work flow in the clinical setting with appropriate deployment of AI.
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • Deep learning has enormous capacity to inform the process of discovery in health research and to facilitate hypothesis generation by identifying novel associations. Established and start-up companies are using deep learning to select or design novel molecules for testing as pharmaceuticals or biologics, with in silico exploration preceding in vitro examination and in vivo experimentation. Researchers across disciplines have also found unexpected clusters within data sets by comparing the intensity of activation of feature detectors in the hidden layers of deep neural nets. As always, however, basic and clinical experimentation remains essential to establish causation and causal pathways.
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • "In the longer term, deep learning can relate those personalized features to the clinical course of similar patients, using data from millions of patient records containing billions of medical events. Thus, while concerns are understandably raised that automation could de- humanize clinical care, these advances could provide professionals and patients alike with vastly better and more specific information, and, as Fogel and Kvedar argue, give physicians more time "to focus on the tasks that are uniquely human: building relationships, exercising empathy, and using human judgment to guide and advise."
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • Deep learning is diffusing rapidly through a combination of open-source and proprietary programs. Technology giants are making massive investments in the development of software libraries for deep learning, some of which are open sourced. These huge enterprises, as well as start-ups, are applying deep learning tools to health care all over the world. Moreover, many academic and nonprofit teams are publishing and sharing algorithms freely, and local development is now widespread.
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
  • However, unlike a standardized diagnostic test or drug, the performance of deep learning and other machine-learning methods improves with exposure to larger or more relevant data sets, or with easily made modifications to the architecture of the models or training procedures. Regulators and technology assessors will need to distinguish issues inherent in decision-support algorithms from those attributable to misuse by clinical decision makers. Procurement agencies and health care administrators will need to be uncharacteristically nimble to keep up.
    On the prospects for a (deep) learning health care system
    Naylor CD
    [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103

  • Deep Learning Approach for Evaluating Knee MR Images: Achieving High Diagnostic Performance for Cartilage Lesion Detection
    Fang Liu et al.
    Radiology 2018 (in press)


  • "By now, it’s almost old news: big data will transform medicine. It’s essential to remember, however, that data by themselves are useless. To be useful, data must be analyzed, interpreted, and acted on. Thus, it is algorithms —
    not data sets — that will prove transformative."
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med 375;13 September 29, 2016
  • “But where machine learning shines is in handling enormous numbers of predictors — sometimes, remarkably, more predictors than observations — and combining them in nonlinear and highly interactive ways. This capacity allows us to use new kinds of data, whose sheer volume or complexity would previously have made analyzing them unimaginable.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med 375;13 September 29, 2016
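  • The “more predictors than observations” setting mentioned above is where regularization becomes essential: a penalized model remains fittable when p > n and discards most candidate predictors. A small scikit-learn sketch with 200 observations and 1,000 synthetic predictors (illustrative values only):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_obs, n_pred = 200, 1000                    # far more predictors than observations
    X = rng.normal(size=(n_obs, n_pred))
    signal = X[:, :5].sum(axis=1)                # only the first 5 predictors matter
    y = (signal + rng.normal(scale=0.5, size=n_obs) > 0).astype(int)

    # The L1 penalty keeps the problem well posed and zeroes out most coefficients.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    print("non-zero coefficients:", int((model.coef_ != 0).sum()))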
  • “Another key issue is the quantity and quality of input data. Machine learning algorithms are highly data hungry, often requiring millions of observations to reach acceptable performance levels. In addition, biases in data collection can substantially affect both performance and generalizability. Lactate might be a good predictor of the risk of death, for example, but only a small, nonrepresentative sample of patients have their lactate levels checked.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med 375;13 September 29, 2016
  • “Machine learning has become ubiquitous and indispensable for solving complex problems in most sciences. In astronomy, algorithms sift through millions of images from telescope surveys to classify galaxies and find supernovas.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med 375;13 September 29, 2016
  • “Increasingly, the ability to transform data into knowledge will disrupt at least three areas of medicine. First, machine learning will dramatically improve the ability of health professionals to establish a prognosis. Current prognostic models (e.g., the Acute Physiology and Chronic Health Evaluation [APACHE] score and the Sequential Organ Failure Assessment [SOFA] score) are restricted to only a handful of variables, because humans must enter and tally the scores. But data could instead be drawn directly from EHRs or claims databases, allowing models to use thousands of rich predictor variables.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med 375;13 September 29, 2016


  • Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology
    An Tang et al.
    Canadian Association of Radiologists Journal 69 (2018) 120–135


  • “One of the first and most significant hurdles to getting a CPT code is the need for peer-reviewed research in the United States that demonstrates both the efficacy and safety of the procedure. The second hurdle is the need for the procedure to be widely performed by a large number of physicians in the United States. These two requirements will prevent many AI software programs from achieving a CPT code. But, let us presume that at least one AI tool makes the cut and gets a CPT code. It will then have to be valued by the Relative Value Scale Update Committee (RUC) to get assigned RVUs.”
    Artificial Intelligence: Who Pays and How?
    Schoppe, Kurt
    Journal of the American College of Radiology (in press)
  • “The RUC values the professional component of a medical procedure based upon the work of a physician. The primary components of physician work include the time it takes to perform the service, the level of technical skill required, and the mental effort and judgment necessary. For most AI tools I have seen, there is minimal to no physician work. Some AI processes run in the background and “prioritize” CT scans based on characteristics that may indicate an emergent finding. There is no physician work in this. Some AI processes may highlight specific imaging findings for the radiologist. This type of operation would be considered similar to computer-aided detection, and so would be valued similarly to prior CPT codes for computer-aided detection used in chest radiographs or mammography, though much of this work is either unreimbursed or bundled into the actual diagnostic procedure (eg, mammography and breast MRI).”
    Artificial Intelligence: Who Pays and How?
    Schoppe, Kurt
    Journal of the American College of Radiology (in press)
  • “My opinion is that neither the government nor private payers will reimburse physicians and hospitals for using AI-driven software products. I believe that we will all purchase AI tools and treat them as an unreimbursed business expense. We will invest in AI software to ensure we are delivering high-quality work, to increase our efficiency, and to simplify clerical type tasks. In this way, paying for AI tools will merely be a cost of doing business like other operational expenses we incur.”
    Artificial Intelligence: Who Pays and How?
    Schoppe, Kurt
    Journal of the American College of Radiology (in press)
  • “Artificial intelligence (AI) is rapidly moving from an experimental phase to an implementation phase in many fields, including medicine. The combination of improved availability of large datasets, increasing computing power, and advances in learning algorithms has created major performance breakthroughs in the development of AI applications. In the last 5 years, AI techniques known as deep learning have delivered rapidly improving performance in image recognition, caption generation, and speech recognition. Radiology, in particular, is a prime candidate for early adoption of these techniques. It is anticipated that the implementation of AI in radiology over the next decade will significantly improve the quality, value, and depth of radiology’s contribution to patient care and population health, and will revolutionize radiologists’ workflows.”
    Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology
    An Tang et al.
    Canadian Association of Radiologists Journal 69 (2018) 120–135
  • “In conclusion, with the current fast pace in development of machine learning techniques, and deep learning in particular, there is prospect for a more widespread clinical adoption of machine learning in radiology practice. Machine learning and artificial intelligence are not expected to replace the radiologists in the foreseeable future. These techniques can potentially facilitate radiology workflow, increase radiologist productivity, improve detection and interpretation of findings, reduce the chance of error, and enhance patient care and satisfaction.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “Radiomics is a process designed to extract a large number of quantitative features from radiology images. Radiomics is an emerging field for machine learning that allows for conversion of radiologic images into mineable high-dimensional data. For instance, Zhang et al evaluated over 970 radiomics features extracted from MR images by using machine learning methods and correlated these features to predict local and distant treatment failure of advanced nasopharyngeal carcinoma.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
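  • First-order radiomics features are simply summary statistics of the voxel intensities inside a segmented region of interest. A minimal NumPy/SciPy sketch with a placeholder volume and mask (real pipelines such as pyradiomics compute hundreds of standardized features):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    image = rng.normal(60, 15, size=(64, 64, 64))   # placeholder CT/MR volume
    mask = np.zeros(image.shape, dtype=bool)
    mask[20:40, 20:40, 20:40] = True                # placeholder lesion segmentation

    roi = image[mask]                               # intensities inside the lesion

    features = {                                    # a handful of first-order features
        "mean": roi.mean(),
        "variance": roi.var(),
        "skewness": stats.skew(roi),
        "kurtosis": stats.kurtosis(roi),
        "entropy": stats.entropy(np.histogram(roi, bins=64)[0] + 1e-9),
        "volume_voxels": int(mask.sum()),
    }
    print(features)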
  • “Machine learning approaches to the interrogation of a wide spectrum of such data (sociodemographic, imaging, clinical, laboratory, and genetic) have the potential to further personalize health care, far beyond what would be possible through imaging applications alone. Precision medicine requires the use of novel computational techniques to harness the vast amounts of data required to discover individualized disease factors and treatment decisions.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • AI in Healthcare

  • “Deep learning is a type of representation learning in which the algorithm learns a composition of features that reflect a hierarchy of structures in the data. Complex representations are expressed in terms of simpler representations.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
  • “Although neural networks have been used for decades, in recent years three key factors have enabled the training of large neural networks: (a) the availability of large quantities of labeled data, (b) inexpensive and powerful parallel computing hardware, and (c) improvements in training techniques and architectures.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
  • “Deep CNNs exploit the compositional structure of natural images so that shifts and deformations of objects in the images do not significantly affect the overall performance of the network.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
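  • The shift tolerance described above comes from convolution (the same filters applied at every position) combined with pooling (keeping only the strongest local response). A small PyTorch sketch comparing an untrained conv + pool layer’s output for an image and a slightly shifted copy (purely illustrative):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    layer = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 8 shared filters, applied everywhere
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=4),                # strongest response per 4x4 block
    )

    image = torch.zeros(1, 1, 32, 32)
    image[0, 0, 10:20, 10:20] = 1.0                            # a bright square "lesion"
    shifted = torch.roll(image, shifts=(1, 1), dims=(2, 3))    # shift it by one pixel

    with torch.no_grad():
        a, b = layer(image), layer(shifted)

    # The pooled feature maps change very little under the small shift.
    print("relative change:", ((a - b).abs().mean() / (a.abs().mean() + 1e-8)).item())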
  • “The creation of these large databases of labeled medical images and many associated challenges will be fundamental to foster future research in deep learning applied to medical images.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
© 2019 Elliot K. Fishman, MD, FACR
All Rights Reserved.
www.CTISUS.com