• Assisting, Replicating, or Autonomously Acting? An Ethical Framework for Integrating AI Tools and Technologies in Healthcare

    Aasim I Padela, Rwan Hayek, Aliya Tabassum, Fabrice Jotterand, Junaid Qadir
    Bioethics. 2025 Jul 18. doi: 10.1111/bioe.70019. Online ahead of print.

    Abstract

    Artificial intelligence (AI)-based technologies are increasingly being utilized, tested, and integrated into conventional healthcare delivery. Technological opportunities, ranging from machine-learning-based data analysis tools to large language model-based virtual healthcare assistants, offer significant potential to enhance healthcare access and improve outcomes. Researchers have discussed potential benefits of greater AI integration in healthcare, including improved resource allocation, diagnostic accuracy, and patient outcomes, and have also voiced concerns about data privacy, algorithmic bias, and diffused accountability. This paper adds to the literature by proposing an ethical framework that allows for both describing and normatively evaluating AI-mediated healthcare delivery based on its potential impact on human-centered patient care. Drawing upon Pellegrino's notions of the patient-doctor relationship, we propose a framework with two axes, one related to the spectrum of patient engagement and the other to the clinician's role, through which to assess the use of AI in healthcare. Technologies and tools that have minimal to no interaction with patients and primarily assist physicians in making clinical decisions pose the fewest ethical challenges. On the other hand, those that are fully patient-facing and work in parallel with doctors or autonomously in therapeutic or decisional roles are the most controversial, as they risk making healthcare less human-centric. As we advance toward more pervasive integration of AI in healthcare, our framework can facilitate upfront design and downstream implementation-related decisions.