Saira Ghafur, M.B., Ch.B., M.R.C.P., M.Sc., Ryan Callahan, B.S.F.S., and David Blumenthal, M.D., M.P.P.
The rapid spread of artificial intelligence (AI) has prompted global interest in its use and regulation. International regulatory efforts offer opportunities for cross-national learning but risk complicating innovation across national boundaries. In leading industrialized democracies, approaches to regulating health AI are just taking shape, vary significantly, and could change dramatically with shifts in national governments. Regulatory policies for pregenerative AI (PGAI), which uses machine learning–based predictive analytics, are more established and consistent than those for generative AI, which encompasses foundation models and their derivative applications. This is because PGAI has been available for decades and has been managed under established regulatory pathways that treat health AI as “software as a medical device.” Because AI health care applications depend on patient data, existing and new regulations governing access to such data will shape the development and use of health care AI. Cross-national harmonization of regulatory regimes is nascent and likely to be challenging, but approaches to regulating PGAI appear amenable to international alignment. Research on standardized metrics and evaluation frameworks for health care AI offers a promising avenue for international collaboration. (Funded by the Commonwealth Fund.)