Research News
Dec 16, 2025
- Medicine
Artificial superintelligence alignment in healthcare
Framework ensuring safe artificial superintelligence in healthcare
Inappropriate use of AI could harm patients; as in the Swiss cheese model of risk management, multiple imperfect safeguards are layered so that, together, they block most threats.
Credit: Osaka Metropolitan University

The emergence of Artificial Superintelligence (ASI) in healthcare presents unprecedented opportunities for revolutionizing diagnostics, treatment planning, and population health management, but also introduces critical risks if these systems are not properly aligned with human values and clinical objectives.
An Osaka Metropolitan University-led research team conducted a review examining the theoretical foundations of ASI and the alignment problem in healthcare contexts, exploring how misaligned Artificial Intelligence (AI) systems could optimize for the wrong objectives or pursue harmful strategies, leading to patient harm and systemic failures. Current challenges in AI alignment are illustrated through real-world examples from radiology and clinical decision-making, where algorithms have demonstrated concerning biases, generalizability failures, and optimization for inappropriate proxy measures.
The researchers analyzed key alignment challenges, including objective complexity and technical pitfalls, bias and fairness issues in healthcare data, ethical integration concerns involving compassion and patient autonomy, and system-level policy challenges around regulation and liability. Technical alignment strategies are discussed, including reinforcement learning from human feedback, interpretability requirements, formal verification methods, and adversarial testing approaches. Normative alignment solutions encompass ethical frameworks, professional standards, patient engagement protocols, and multi-level governance structures spanning institutional, national, and international coordination.
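The caption's Swiss cheese metaphor and the layered safeguards described above can be made concrete with a short sketch. The following Python snippet is purely illustrative and not taken from the paper: every function name, threshold, and field is a hypothetical placeholder, showing only the general idea that several independent, imperfect checks must all pass before an AI recommendation is acted on.

    # Illustrative sketch only (not from the paper): layering imperfect
    # safeguards so that a harmful AI recommendation is blocked unless
    # every layer fails at once. All names and thresholds are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Recommendation:
        """A hypothetical AI treatment recommendation with model metadata."""
        patient_id: str
        action: str
        model_confidence: float      # 0.0-1.0, self-reported by the model
        bias_audit_passed: bool      # outcome of an offline fairness audit
        adversarially_tested: bool   # outcome of red-team / stress testing

    # Each safeguard is one imperfect "slice of cheese": it returns True if
    # the recommendation may pass through this layer, False if blocked here.
    def confidence_layer(rec: Recommendation) -> bool:
        return rec.model_confidence >= 0.90

    def bias_audit_layer(rec: Recommendation) -> bool:
        return rec.bias_audit_passed

    def adversarial_layer(rec: Recommendation) -> bool:
        return rec.adversarially_tested

    def human_review_layer(rec: Recommendation) -> bool:
        # In practice this would route to a clinician; here it is a stub.
        return rec.action not in {"withhold treatment", "off-label high-dose"}

    SAFEGUARDS: List[Callable[[Recommendation], bool]] = [
        confidence_layer,
        bias_audit_layer,
        adversarial_layer,
        human_review_layer,
    ]

    def allow_recommendation(rec: Recommendation) -> bool:
        """Act on a recommendation only if every safeguard lets it through."""
        return all(layer(rec) for layer in SAFEGUARDS)

    if __name__ == "__main__":
        rec = Recommendation(
            patient_id="demo-001",
            action="order follow-up imaging",
            model_confidence=0.95,
            bias_audit_passed=True,
            adversarially_tested=True,
        )
        print("allowed" if allow_recommendation(rec) else "blocked for human escalation")

The point of the layered design is that no single check needs to be perfect: a biased, overconfident, or untested recommendation is stopped as long as at least one layer catches it, which is how the review frames defense-in-depth alignment safeguards in clinical settings.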
The team emphasized that successful ASI alignment in healthcare requires combining cutting-edge AI research with fundamental medical ethics, noting that while proper alignment could enable transformative health improvements and medical breakthroughs, misalignment risks undermining the core purpose of medicine. The stakes of this alignment challenge are characterized as among the highest in both technology and ethics, with implications extending from individual patient safety to public trust and potentially existential risks.
Paper information
Journal: Japanese Journal of Radiology
Title: Artificial superintelligence alignment in healthcare
DOI: 10.1007/s11604-025-01907-1
Authors: Daiju Ueda, Shannon L. Walston, Ryo Kurokawa, Tsukasa Saida, Maya Honda, Mami Iima, Tadashi Watabe, Masahiro Yanagawa, Kentaro Nishioka, Keitaro Sofue, Akihiko Sakata, Shunsuke Sugawara, Mariko Kawamura, Rintaro Ito, Koji Takumi, Seitaro Oda, Kenji Hirata, Satoru Ide, Shinji Naganawa
Published: 14 November 2025
URL: https://doi.org/10.1007/s11604-025-01907-1
Contact
Daiju Ueda
Graduate School of Medicine
Email: ai.labo.ocu[at]gmail.com
*Please change [at] to @.