ChatGPT
- “ChatGPT is able to generate coherent research articles, which on initial review may closely resemble authentic articles published by academic researchers. However, all of the articles we assessed were factually inaccurate and had fictitious references. It is worth noting, however, that the articles generated may appear authentic to an untrained reader.”
A comparison of ChatGPT‑generated articles with human‑written articles
Sisith Ariyaratne et al
Skeletal Radiology 2023 (in press) - “Our study had several limitations. We used a relatively small sample size, analyzing only 5 articles generated by ChatGPT. We also used version 3.0, which may have certain limitations, including the ability to generate accurate information. Further studies analyzing a larger number of articles with more advanced versions of the AI software would ultimately be needed to definitively assess its reliability in generating scientific articles and could be a topic for future research.”
A comparison of ChatGPT‑generated articles with human‑written articles
Sisith Ariyaratne et al
Skeletal Radiology 2023 (in press) - “The use of ChatGPT and other related AI technology in nurse education is expected to continue to grow as technology advances and students and educators become more comfortable with its use. However, it is important to note that while AI technology can enhance teaching and learning, it should not replace human interaction and support. Nurse educators and students should be mindful of the limitations of AI technology and ensure that it is used in conjunction with other teaching methods to provide holistic nurse education.”
Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education
Edmond Pui Hang Choi et al.
Nurse Education Today 125 (2023) 105796 - “While there is certainly potential for ChatGPT to enhance the teaching and learning experience, there are also concerns about its impact on students' critical thinking and clinical reasoning skills. To understand the impacts of ChatGPT on nurse education, more empirical research is needed to investigate: (i) the impact of ChatGPT on student learning outcomes, such as critical thinking, clinical reasoning and knowledge acquisition; (ii) the role of ChatGPT in nurse educators' teaching and how it affects their workload, teaching practices and student engagement; and (iii) the ethical considerations and implications of using ChatGPT in nurse education.”
Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education
Edmond Pui Hang Choi et al.
Nurse Education Today 125 (2023) 105796 - “Nurse educators should teach students when it is appropriate to use ChatGPT, how to critically appraise the contents generated by it and how to avoid over-reliance on it (Mhlanga, 2023). Nurse educators should help students develop critical and independent thinking skills to evaluate the validity, appropriateness and relevance of the information provided by ChatGPT. For example, they can teach students to consider potential biases and cross-validate information using reputable sources. Nursing students should be encouraged to use multiple sources of information, such as textbooks, academic journals and clinical protocols and guidelines, in addition to ChatGPT as human-driven verification processes are indispensable (van Dis et al., 2023).”
Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education
Edmond Pui Hang Choi et al.
Nurse Education Today 125 (2023) 105796 - “In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.”
Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
John W. Ayers et al.
JAMA Intern Med. doi:10.1001/jamainternmed.2023.1838 - Question Can an artificial intelligence chatbot assistant provide responses to patient questions that are of comparable quality and empathy to those written by physicians?
Findings In this cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed health care professionals compared physician and chatbot responses to patients’ questions asked publicly on a public social media forum. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.
Meaning These results suggest that artificial intelligence assistants may be able to aid in drafting responses to patient questions.
Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
John W. Ayers et al.
JAMA Intern Med. doi:10.1001/jamainternmed.2023.1838 - “ChatGPT10 represents a new generation of AI technologies driven by advances in large language models. ChatGPT reached 100 million users within 64 days of its November 30, 2022 release and is widely recognized for its ability to write near-human-quality text on a wide range of topics. The system was not developed to provide health care, and its ability to help address patient questions is unexplored. We tested its ability to respond with high-quality and empathetic answers to patients’ health care questions, by comparing the chatbot responses with physicians’ responses to questions posted on a public social media forum.”
Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
John W. Ayers et al.
JAMA Intern Med. doi:10.1001/jamainternmed.2023.1838 - “While this cross-sectional study has demonstrated promising results in the use of AI assistants for patient questions, it is crucial to note that further research is necessary before any definitive conclusions can be made regarding their potential effect in clinical settings. Despite the limitations of this study and the frequent overhyping of new technologies, studying the addition of AI assistants to patient messaging workflows holds promise with the potential to improve both clinician and patient outcomes.”
Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
John W. Ayers et al.
JAMA Intern Med. doi:10.1001/jamainternmed.2023.1838
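The drafting workflow Ayers and colleagues propose (model-generated replies that a physician reviews and edits before anything is sent) is straightforward to prototype. The sketch below uses the OpenAI Python client; the model name, system prompt, and review gate are illustrative assumptions, not the study's actual setup.

```python
# A minimal sketch of the "AI-drafted, physician-edited" messaging workflow
# the study proposes. The model name, system prompt, and review gate are
# illustrative assumptions, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(patient_question: str) -> str:
    """Generate a draft answer for a clinician to review and edit."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": ("Draft an empathetic, medically accurate reply to a "
                         "patient question. A licensed physician will review "
                         "and edit this draft before anything is sent.")},
            {"role": "user", "content": patient_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_reply("Is it dangerous to swallow a toothpick?")
    print("DRAFT FOR PHYSICIAN REVIEW (do not send unedited):")
    print(draft)
```

The design point is the review gate: nothing generated here reaches a patient until a physician has edited and approved it.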
- “We should be clear-eyed about the risks inherent to any new technology, especially one that carries existential implications. And yet, I am cautiously optimistic about a future of improved health care system efficiency, better patient outcomes, and reduced burnout; a future where AI enables us to get back to the reason why we decided to pursue medicine in the first place—to get up from the computer and back to the bedside.”
Medicine in the Era of Artificial Intelligence: Hey Chatbot, Write Me an H&P
Teva D. Brender
JAMA Internal Medicine Published online April 28, 2023 - “However, my excitement is tempered by a healthy dose of skepticism. For instance, consider the example of a more analog technology. Despite their initial promise, the effect of medical scribes on health care quality, patient satisfaction, and physician productivity and burnout has been decidedly mixed. One might counter that, leveraging the power of big data, AI’s potential is limitless. Nevertheless, we should remain open to the eventuality that, like medical scribes, AI will similarly underdeliver, or that its implementation in health care might be slower and the initial use cases more circumscribed than the proponents hope.”
Medicine in the Era of Artificial Intelligence: Hey Chatbot, Write Me an H&P
Teva D. Brender
JAMA Internal Medicine Published online April 28, 2023 - “Finally, these programs are not sentient; they simply use massive amounts of text to predict one word after another, and their outputs may mix truth with patently false statements called hallucinations. As such, physicians will need to learn how to integrate these tools into clinical practice, defining clear boundaries between full, supervised, and proscribed autonomy. Just as I do not routinely count the little boxes when determining a heart rate, instead trusting the computer-generated electrocardiogram report, I always meticulously scrutinize the waveform before activating the catheterization laboratory for an ST-elevation myocardial infarction.”
Medicine in the Era of Artificial Intelligence: Hey Chatbot, Write Me an H&P
Teva D. Brender
JAMA Internal Medicine Published online April 28, 2023 - “A generative pretrained transformer (GPT) is an AI tool that produces text resembling human writing, allowing users to interact with AI almost as if they are communicating with another person. The sudden rise in popularity of LLMs was driven largely by GPT-3, OpenAI’s third iteration, which was called the fastest growing app of all time and the most innovative LLM. People use GPT by entering prompts—text instructions in the form of questions or commands. Creating effective AI prompts is an art as much as a science, and the possibilities seem endless. One can use GPT like a search engine. However, GPT’s predictive algorithms can also answer questions that have never been posed.”
AI-Generated Medical Advice-GPT and Beyond.
Haupt CE, Marks M.
JAMA. 2023 Apr 25;329(16):1349-1350. - “For clinicians, GPT can potentially ease burnout by taking on repetitive tasks. It could provide clinical decision support and be incorporated into electronic medical record platforms like Epic. GPT might augment or replace frequently used resources like UpToDate. In theory, physicians could enter patient information into the software and ask for a differential diagnosis or preliminary treatment plan. However, current versions of GPT are not HIPAA compliant and could jeopardize patient privacy. Until professional grade versions with adequate safeguards are available, clinicians should avoid inputting protected health information.”
AI-Generated Medical Advice-GPT and Beyond.
Haupt CE, Marks M.
JAMA. 2023 Apr 25;329(16):1349-1350. - “With respect to AI-generated medical advice, as with other innovations, we suggest focusing on relevant social relationships and how the technology affects them. If clinicians use LLMs to aid decision-making, they function like other medical resources or tools. However, using AI to replace human judgment poses safety risks to patients and may expose clinicians to legal liability. Until its accuracy and reliability are proven, GPT should not replace clinician judgment. Although clinicians are not responsible for harms caused by consumer-facing LLMs, they should educate patients about the risks. They might also advocate for FTC regulation that protects patients from false or misleading AI-generated medical advice.”
AI-Generated Medical Advice-GPT and Beyond.
Haupt CE, Marks M.
JAMA. 2023 Apr 25;329(16):1349-1350. - “When reliable LLMs do surface, they may well be found among specialized systems rather than generalist systems like ChatGPT. The problem of nontransparent and indiscriminate information sourcing is tractable, and market innovations are already emerging as companies develop LLM products specifically for clinical settings. These models focus on narrower tasks than systems like ChatGPT, making validation easier to perform. Specialized systems can vet LLM outputs against source articles for hallucination, train on electronic health records, or integrate traditional elements of clinical decision support software. Some medical informatics researchers are more sanguine than others about the prospects for specialized systems to outperform generalist models. As evidence continues to emerge, medical informatics researchers will have an important role to play in helping physicians understand the current state of specialized systems.”
ChatGPT and Physicians’ Malpractice Risk
Michelle M. Mello, JD, PhD, MPhil; Neel Guha, MS
JAMA Health Forum. 2023;4(5):e231938. - “At their current stage, LLMs have a tendency to generate factually incorrect outputs (called hallucination). The potential to mislead physicians is magnified by the fact that most LLMs source information nontransparently. Typically, no list of references is provided by which a physician may evaluate the reliability of the information used to generate the output. When references are given, they are often insufficient or unsupportive of the generated output (if not entirely fabricated).”
ChatGPT and Physicians’ Malpractice Risk
Michelle M. Mello, JD, PhD, MPhil; Neel Guha, MS
JAMA Health Forum. 2023;4(5):e231938. - “ChatGPT has exploded into the national consciousness. The potential for large language models (LLMs) such as ChatGPT, Bard, and many others to support or replace humans in a range of areas is now clear—and medical decisions are no exception.1 This has sharpened a perennial medicolegal question: How can physicians incorporate promising new technologies into their practice without increasing liability risk? The answer lawyers often give is that physicians should use LLMs to augment, not replace, their professional judgment.2 Physicians might be forgiven for finding such advice unhelpful. No competent physician would blindly follow model output. But what exactly does it mean to augment clinical judgment in a legally defensible fashion?”
ChatGPT and Physicians’ Malpractice Risk
Michelle M. Mello, JD, PhD, MPhil; Neel Guha, MS
JAMA Health Forum. 2023;4(5):e231938. - “ChatGPT, for example, was trained on a great deal of data from the Internet. As the quality of responses depends on the quality of the training data, ChatGPT at times produces unsettling responses. It also “hallucinates” by providing answers that are not derivable from the training data. Importantly, GAI models can be iteratively “fine-tuned” with additional domain-specific training data. Applied appropriately, fine-tuning can produce specialized models that are more accurate and predictable and thus safer for specific use cases.”
Getting Generative AI Right
Samuel Aronson, Ted W. Lieu, Benjamin M. Scirica
NEJM Catalyst April 2023 DOI: 10.1056/CAT.23.0063 - “The Centers for Medicare & Medicaid Services (CMS) should establish reimbursement policies to incentivize the use of time-saving, validated GAI capabilities. The CMS Innovation Center (CMMI) could be used to expedite the testing of new GAI reimbursement models before these models are broadly rolled out. Many billing codes tie compensation to a certain number of minutes per case rather than to quality and efficiency. As a counterbalance, billing codes for use of GAI in specific circumstances could create a powerful incentive for private financing of GAI capabilities. CMS also could consider sponsoring a new GAI competition.”
Getting Generative AI Right
Samuel Aronson, Ted W. Lieu, Benjamin M. Scirica
NEJM Catalyst April 2023 DOI: 10.1056/CAT.23.0063 - Us: Will governments, clinicians, and technologists be supportive?
ChatGPT: There is increasing recognition among clinicians, technologists, and governments of the potential benefits of deploying generative AI to assist with administrative and documentation tasks in health care, and of the need to focus on more complex clinical decision support uses. However, there may also be concerns around ethical data use, patient privacy, and the need for ongoing education and training to ensure that this technology is used effectively and responsibly. As such, it will be important for stakeholders to work together to address these challenges and support the responsible deployment of generative AI in health care.
Getting Generative AI Right
Samuel Aronson, Ted W. Lieu, Benjamin M. Scirica
NEJM Catalyst April 2023 DOI: 10.1056/CAT.23.0063 - AI in Clinical Practice: Barriers
Reimbursement challenges
Physician over-reliance on AI
Potential issues with bias in AI algorithms (usually unintentional)
Liability issues
Patient comfort with AI-assisted medicine
Black-box nature of AI
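Several excerpts above, particularly the Aronson et al. passage, describe fine-tuning: adapting a general pretrained model with additional domain-specific training data. Below is a minimal sketch of that idea using the Hugging Face transformers library; the base model (distilgpt2), the file of de-identified notes, and the hyperparameters are all illustrative assumptions.

```python
# A minimal sketch of domain-specific fine-tuning as the Aronson et al.
# excerpt describes: adapt a general pretrained model with additional
# in-domain text. Base model, file name, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "distilgpt2"  # assumed small base model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(BASE)

# One de-identified, domain-specific training example per line.
dataset = load_dataset("text", data_files={"train": "deidentified_notes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()               # one fine-tuning pass over the domain corpus
trainer.save_model("ft-out")  # specialized model for downstream validation
```

Whether such a pass actually yields a safer, more predictable model for a given clinical use case is precisely the kind of claim that needs validation, as the barriers listed above suggest.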
- “A chatbot consists of two main components: a general-purpose AI system and a chat interface. This article considers specifically an AI system called GPT-4 (Generative Pretrained Transformer 4) with a chat interface; this system is widely available and in active development by OpenAI, an AI research and deployment company. To use a chatbot, one starts a “session” by entering a query — usually referred to as a “prompt” — in plain natural language. Typically, but not always, the user is a human being. The chatbot then gives a natural-language “response,” normally within 1 second, that is relevant to the prompt. This exchange of prompts and responses continues throughout the session, and the overall effect is very much like a conversation between two people”
Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine
Peter Lee, Ph.D., Sebastien Bubeck, Ph.D., and Joseph Petro, M.S.
n engl j med 388;13 nejm.org March 30, 2023 - “A false response by GPT-4 is sometimes referred to as a “hallucination,”6 and such errors can be particularly dangerous in medical scenarios because the errors or falsehoods can be subtle and are often stated by the chatbot in such a convincing manner that the person making the query may be convinced of its veracity. It is thus important to check or verify the output of GPT-4.”
Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine
Peter Lee, Sebastien Bubeck, Joseph Petro
n engl j med 388;13 nejm.org March 30, 2023 - “GPT-4 was not programmed for a specific “assigned task” such as reading images or analyzing medical notes. Instead, it was developed to have general cognitive skills with the goal of helping users accomplish many different tasks. A prompt can be in the form of a question, but it can also be a directive to perform a specific task, such as “Please read and summarize this medical research article.” Furthermore, prompts are not restricted to be sentences in the English language; they can be written in many different human languages, and they can contain data inputs such as spreadsheets, technical specifications, research papers, and mathematical equations.”
Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine
Peter Lee, Sebastien Bubeck, Joseph Petro
n engl j med 388;13 nejm.org March 30, 2023 - Even though GPT-4 was trained only on openly available information on the Internet, when it is given a battery of test questions from the USMLE,11 it answers correctly more than 90% of the time. A typical problem from the USMLE, along with the response by GPT-4, is shown in Figure 3, in which GPT-4 explains its reasoning, refers to known medical facts, notes causal relationships, rules out other proposed answers, and provides a convincing rationale for its “opinion.”
Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine
Peter Lee, Sebastien Bubeck, Joseph Petro
n engl j med 388;13 nejm.org March 30, 2023 - “This knowledge of medicine makes GPT-4 potentially useful not only in clinical settings but also in research. GPT-4 can read medical research material and engage in informed discussion about it, such as briefly summarizing the content, providing technical analysis, identifying relevant prior work, assessing the conclusions, and asking possible follow-up research questions.”
Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine
Peter Lee, Sebastien Bubeck, Joseph Petro
n engl j med 388;13 nejm.org March 30, 2023 - “Perhaps the most important point is that GPT-4 is not an end in and of itself. It is the opening of a door to new possibilities as well as new risks. We speculate that GPT-4 will soon be followed by even more powerful and capable AI systems — a series of increasingly powerful and generally intelligent machines. These machines are tools, and like all tools, they can be used for good but have the potential to cause harm. If used carefully and with an appropriate degree of caution, these evolving tools have the potential to help health care providers give the best care possible.”
Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine
Peter Lee, Sebastien Bubeck, Joseph Petro
n engl j med 388;13 nejm.org March 30, 2023 - “It is important to understand that this is a fast-moving field, so to some extent, what we publish may have the resolution of a snapshot of the landscape taken from a bullet train. Specifically, things happening in close temporal proximity to publication may be blurred because they are changing quickly, but the distant background will be in reasonably good focus. ”
Artificial Intelligence and Machine Learning in Clinical Medicine, 2023
Charlotte J. Haug, Jeffrey M. Drazen
N Engl J Med 2023;388:1201-8. - “A chatbot is a computer program that uses AI and natural-language processing to understand questions and automate responses to them, simulating human conversation. A very early medical chatbot, ELIZA, was developed between 1964 and 1966 by Joseph Weizenbaum at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.”
Artificial Intelligence and Machine Learning in Clinical Medicine, 2023
Charlotte J. Haug, Jeffrey M. Drazen
N Engl J Med 2023;388:1201-8. - “The application of greatest potential and concern is the use of chatbots to make diagnoses or recommend treatment. A user without clinical experience could have trouble differentiating fact from fiction. Both these issues are addressed in the article by Lee and colleagues, who point out the strengths and weaknesses of using chatbots in medicine. Since the authors have created one such entity, bias is likely. Nevertheless, we think that chatbots will become important tools in the practice of medicine. Like any good tool, they can help us do our job better, but if not used properly, they have the potential to do damage. Since the tools are new and hard to test with the use of the traditional methods noted above, the medical community will be learning how to use them, but learn we must. There is no question that the chatbots will also learn from their users.”
Artificial Intelligence and Machine Learning in Clinical Medicine, 2023
Charlotte J. Haug, Jeffrey M. Drazen
N Engl J Med 2023;388:1201-8. - “We firmly believe that the introduction of AI and machine learning in medicine has helped health professionals improve the quality of care that they can deliver and has the promise to improve it even more in the near future and beyond. Just as computer acquisition of radiographic images did away with the x-ray file room and lost images, AI and machine learning can transform medicine. Health professionals will figure out how to work with AI and machine learning as we grow along with the technology. AI and machine learning will not put health professionals out of business; rather, they will make it possible for health professionals to do their jobs better and leave time for the human–human interactions that make medicine the rewarding profession we all value.”
Artificial Intelligence and Machine Learning in Clinical Medicine, 2023
Charlotte J. Haug, Jeffrey M. Drazen
N Engl J Med 2023;388:1201-8.
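Haug and Drazen's mention of ELIZA is a useful reminder of how little machinery early chatbots needed. The toy sketch below mimics ELIZA's approach of regex pattern matching plus pronoun reflection; the rules are invented for illustration and are not Weizenbaum's original script.

```python
# A toy chatbot in the spirit of ELIZA (1964-1966): pattern matching and
# pronoun reflection, with no model of meaning. The rules below are invented
# for illustration; they are not Weizenbaum's original script.
import re

SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, ELIZA-style."""
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+) hurts\b", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # default when nothing matches

if __name__ == "__main__":
    print(respond("I feel anxious about my test results"))
    # -> Why do you feel anxious about your test results?
```

The distance between this kind of scripted reflection and the GPT-4-class systems discussed above is exactly what the surrounding excerpts are weighing.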
- “Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.”
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344 - “In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT. ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations. In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author. Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.” However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.”
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344 - Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship. If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods. This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344 - “Transformative, disruptive technologies, like AI language models, create promise and opportunities as well as risks and threats for all involved in the scientific enterprise. Calls for journals to implement screening for AI-generated content will likely escalate, especially for journals that have been targets of paper mills and other unscrupulous or fraudulent practices. But with large investments in further development, AI tools may be capable of evading any such screens. Regardless, AI technologies have existed for some time, will be further and faster developed, and will continue to be used in all stages of research and the dissemination of information, hopefully with innovative advances that offset any perils. In this era of pervasive misinformation and mistrust, responsible use of AI language models and transparent reporting of how these tools are used in the creation of information and publication are vital to promote and protect the credibility and integrity of medical research and trust in medical knowledge.”
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344 - “This exploratory study found that a popular online AI model provided largely appropriate responses to simple CVD prevention questions as evaluated by preventive cardiology clinicians. Findings suggest the potential of interactive AI to assist clinical workflows by augmenting patient education and patient-clinician communication around common CVD prevention queries. For example, such an application may provide conversational responses to simple queries on informational platforms or create automated draft responses to patient electronic messages for clinicians. Whether these approaches can improve readability should be explored, because prior work has indicated low readability of certain online patient educational materials for CVD prevention.”
Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model.
Sarraju A, Bruemmer D, Van Iterson E, Cho L, Rodriguez F, Laffin L.
JAMA. Published online February 03, 2023. doi:10.1001/jama.2023.1044 - “AI model responses to 21 of 25 questions (84%) were graded as appropriate in both contexts (Table). Four responses (16%) were graded as inappropriate in both contexts. For 3 of the 4 sets of responses, all 3 responses had inappropriate information; for 1 set, 1 of 3 responses was inappropriate. For example, the AI model responded to questions about exercise by firmly recommending both cardiovascular activity and lifting weights, which may be incorrect and potentially harmful for certain patients. Responses about interpreting a low-density lipoprotein cholesterol level of 200 mg/dL lacked relevant details, including familial hypercholesterolemia and genetic considerations. Responses about inclisiran suggested that it is commercially unavailable. No responses were graded as unreliable.”
Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model.
Sarraju A, Bruemmer D, Van Iterson E, Cho L, Rodriguez F, Laffin L.
JAMA. Published online February 03, 2023. doi:10.1001/jama.2023.1044
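For readers who want to see the shape of the Sarraju et al. design, here is a hedged sketch of the collection step: each question is posed to the model three times and the transcripts are written out for clinician grading. The model name, example questions, and CSV layout are assumptions for illustration, not the authors' protocol.

```python
# A sketch of the collection step behind the study above: pose each CVD
# prevention question to the model 3 times and save the transcripts for
# clinician grading. Model name, questions, and CSV layout are illustrative
# assumptions, not the authors' protocol.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [  # illustrative stand-ins for the study's 25 questions
    "How can I prevent heart disease?",
    "What does an LDL cholesterol level of 200 mg/dL mean?",
]

with open("responses_for_grading.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "trial", "response", "grade"])
    for question in QUESTIONS:
        for trial in range(1, 4):  # 3 responses per question, as in the study
            reply = client.chat.completions.create(
                model="gpt-3.5-turbo",  # assumed "popular online AI model"
                messages=[{"role": "user", "content": question}],
            )
            writer.writerow([question, trial,
                             reply.choices[0].message.content, ""])

# Clinicians then fill in the empty "grade" column; the headline result is a
# simple proportion, e.g. 21 of 25 question sets appropriate: 21 / 25 = 84%.
```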