Jasmine Chiat Ling Ong, Pharm.D., Shelley Yin-Hsi Chang, M.D., Wasswa William, Ph.D., Atul J. Butte, M.D., Ph.D., Nigam H. Shah, M.B.B.S., Ph.D., Lita Sui Tjien Chew, M.Med.Sc., Nan Liu, Ph.D., Finale Doshi-Velez, Ph.D., Wei Lu, Ph.D., Julian Savulescu, M.D., Ph.D., and Daniel Shu Wei Ting, M.D., Ph.D.
Large language models (LLMs) have shown significant promise in medical research, medical education, and clinical tasks. While acknowledging these capabilities, we face the challenge of balancing the definition and enforcement of ethical boundaries with the drive for innovation in LLM technology for medicine. We herein propose a framework, grounded in four bioethical principles, to promote the responsible use of LLMs. This model calls for responsible application of LLMs by three parties: the patient, the clinician, and the systems that govern the LLM itself. It also suggests potential approaches to mitigating the risks of LLMs in medicine. This approach allows us to use LLMs ethically, equitably, and effectively in medicine.