- “Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.”
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344
- “In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT. ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations. In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author. Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.” However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.”
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344
- Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship. If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods. This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344
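As a worked illustration of the reporting standard above (a hypothetical wording, not drawn from the article), an Acknowledgment entry meeting these requirements might read: “The first draft of the Discussion section was generated with the assistance of ChatGPT (GPT-3.5, January 2023 release; OpenAI). The authors reviewed and edited all generated text and take full responsibility for the integrity of the published content.” Note that the description of the content created, the name of the tool, its version, and its manufacturer are all stated, as the guidance requires.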
- “Transformative, disruptive technologies, like AI language models, create promise and opportunities as well as risks and threats for all involved in the scientific enterprise. Calls for journals to implement screening for AI-generated content will likely escalate, especially for journals that have been targets of paper mills and other unscrupulous or fraudulent practices. But with large investments in further development, AI tools may be capable of evading any such screens. Regardless, AI technologies have existed for some time, will be further and faster developed, and will continue to be used in all stages of research and the dissemination of information, hopefully with innovative advances that offset any perils. In this era of pervasive misinformation and mistrust, responsible use of AI language models and transparent reporting of how these tools are used in the creation of information and publication are vital to promote and protect the credibility and integrity of medical research and trust in medical knowledge.”
Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge.
Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL.
JAMA. Published online January 31, 2023. doi:10.1001/jama.2023.1344
- “This exploratory study found that a popular online AI model provided largely appropriate responses to simple CVD prevention questions as evaluated by preventive cardiology clinicians. Findings suggest the potential of interactive AI to assist clinical workflows by augmenting patient education and patient-clinician communication around common CVD prevention queries. For example, such an application may provide conversational responses to simple queries on informational platforms or create automated draft responses to patient electronic messages for clinicians. Whether these approaches can improve readability should be explored, because prior work has indicated low readability of certain online patient educational materials for CVD prevention.”
Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model.
Sarraju A, Bruemmer D, Van Iterson E, Cho L, Rodriguez F, Laffin L.
JAMA. Published online February 3, 2023. doi:10.1001/jama.2023.1044
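The study posed each of 25 CVD prevention questions to the chat interface and collected 3 responses per question for clinician grading. A minimal sketch of how such a protocol could be approximated programmatically, assuming the OpenAI Python client (an assumption; the study used the public chat interface, and the model name and sample questions below are placeholders):

    # Approximate the study protocol: collect 3 responses per question
    # for later clinician grading. Assumes the openai package (v1 client)
    # and an OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    questions = [
        "How can I prevent heart disease?",     # placeholder questions;
        "Should I lift weights for my heart?",  # the study's 25 items
    ]                                           # are not reproduced here

    response_sets = {}
    for q in questions:
        # Three independent responses per question, mirroring the
        # 3-responses-per-set design described in the excerpt below.
        response_sets[q] = [
            client.chat.completions.create(
                model="gpt-3.5-turbo",  # placeholder model name
                messages=[{"role": "user", "content": q}],
            ).choices[0].message.content
            for _ in range(3)
        ]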
- “AI model responses to 21 of 25 questions (84%) were graded as appropriate in both contexts (Table). Four responses (16%) were graded as inappropriate in both contexts. For 3 of the 4 sets of responses, all 3 responses had inappropriate information; for 1 set, 1 of 3 responses was inappropriate. For example, the AI model responded to questions about exercise by firmly recommending both cardiovascular activity and lifting weights, which may be incorrect and potentially harmful for certain patients. Responses about interpreting a low-density lipoprotein cholesterol level of 200 mg/dL lacked relevant details, including familial hypercholesterolemia and genetic considerations. Responses about inclisiran suggested that it is commercially unavailable. No responses were graded as unreliable.”
Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model.
Sarraju A, Bruemmer D, Van Iterson E, Cho L, Rodriguez F, Laffin L.
JAMA. Published online February 3, 2023. doi:10.1001/jama.2023.1044
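The reported proportions follow directly from the counts in the excerpt; a quick arithmetic check (counts taken verbatim from the study summary above):

    # Verify the percentages reported in the excerpt: 21 of 25 questions
    # graded appropriate, 4 of 25 graded inappropriate.
    appropriate, inappropriate, total = 21, 4, 25
    assert appropriate + inappropriate == total
    print(f"appropriate: {appropriate / total:.0%}")      # -> 84%
    print(f"inappropriate: {inappropriate / total:.0%}")  # -> 16%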