Methods: To address this gap in the deployment of agentic artificial intelligence (AI) in health care, we introduce MedAgentBench, a broad evaluation suite designed to assess the agent capabilities of LLMs within medical records contexts. MedAgentBench encompasses 300 patient-specific, clinically derived tasks from 10 categories written by human physicians, realistic profiles of 100 patients with over 700,000 data elements, a Fast Healthcare Interoperability Resources (FHIR)-compliant interactive environment, and an accompanying codebase. The environment uses the standard application programming interfaces and communication infrastructure of modern electronic health record (EHR) systems so that it can be easily migrated into live EHR systems.
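As a minimal illustrative sketch (not code from the MedAgentBench codebase), an agent interacting with a FHIR-compliant environment of this kind would issue standard REST calls; the base URL and patient identifier below are hypothetical placeholders.

```python
# Illustrative sketch: querying a FHIR-compliant server for a patient's
# laboratory observations via the standard FHIR REST search API.
# The endpoint and patient ID are hypothetical, not taken from the paper.
import requests

FHIR_BASE = "http://localhost:8080/fhir"   # hypothetical FHIR endpoint
patient_id = "example-patient-id"          # hypothetical patient identifier

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": patient_id, "category": "laboratory"},
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()
bundle = resp.json()  # FHIR Bundle containing matching Observation resources

for entry in bundle.get("entry", []):
    obs = entry["resource"]
    code = obs.get("code", {}).get("text", "unknown")
    value = obs.get("valueQuantity", {})
    print(code, value.get("value"), value.get("unit"))
```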
Results: MedAgentBench presents an unsaturated, agent-oriented benchmark on which current state-of-the-art LLMs exhibit some ability to succeed. The best model (Claude 3.5 Sonnet v2) achieves a success rate of 69.67%. However, there is still substantial room for improvement, which gives the community a clear direction for future optimization efforts. Furthermore, performance varies considerably across task categories.
Conclusions: Agent-based task frameworks and benchmarks are a necessary next step toward effectively improving and integrating AI systems into clinical workflows. MedAgentBench establishes such a benchmark and is publicly available at https://github.com/stanfordmlgroup/MedAgentBench, offering a valuable framework for model developers to track progress and drive continuous improvements in the agent capabilities of LLMs within the medical domain. (Funded by the NIH and Singapore's National Science Scholarship [PhD].)