Published: 10/13/2025
Global conflicts, infectious diseases, natural disasters driven by climate change, and a growing number of refugees worldwide are magnifying the need for humanitarian services at a time of increasingly constrained resources.
In a new Perspective piece in the New England Journal of Medicine, three Stanford Center for Innovation in Global Health affiliates explore potential uses of artificial intelligence to aid humanitarian responses to disaster and conflict — emphasizing the need for caution and guardrails to ensure its ethical adoption.
“With the defunding of USAID, humanitarian services have become truly constrained and consideration of creative workarounds with AI can be life-saving,” said Michele Barry, MD, senior associate dean of global health and lead author of the piece. “However, it’s imperative that humanitarian agencies, policymakers, and AI developers account for the substantial technical and financial challenges to the widespread adoption of such tools in low-resourced settings, as well as ethical concerns about testing and deploying new technologies in sensitive humanitarian contexts. We hope this commentary shines a light on both the possibilities and risks of using AI in these settings.”
“AI is rapidly transforming healthcare, with the potential for profound advances in care, but with the equally daunting potential for misuse, particularly among populations for whom data is lacking, as is often the case in settings of humanitarian conflicts,” said co-author Gary Darmstadt, MD, MS, associate dean for maternal and child health and professor of neonatal and developmental medicine in the Stanford Department of Pediatrics. “While the use of AI in humanitarian settings offers significant opportunity, ensuring accountability and transparency is essential in these use cases.”
Jamie Hansen, global health communications manager at the Stanford Center for Innovation in Global Health, was also a co-author.
Read the full commentary here.