Global Regulators Set Next Steps for AI in Health
The World Health Organization (WHO) and the Ministry of Food and Drug Safety of the Republic of Korea co-hosted the 2025 AI Regulatory and International Symposium (AIRIS) in Incheon, bringing together regulators, researchers, and technology leaders to advance the responsible use of artificial intelligence in health.
Under the theme “Regulation for AI, Together for Tomorrow,” the symposium focused on developing frameworks to ensure AI systems are safe, ethical, and equitable across the medical product lifecycle. Participants from national regulatory agencies, industry, and academia highlighted how AI is transforming health research, diagnostics, and manufacturing—and why governance must evolve in parallel.
“As AI becomes more sophisticated and its health applications expand, so must our efforts to make them safe, effective, ethical, and equitable,” said Tedros Adhanom Ghebreyesus, PhD, MS, WHO director-general.
Regulators call for risk-based framework for AI governance
The AIRIS 2025 Outcome Statement identified several key priorities for AI governance:
- Lifecycle-based regulation: Oversight should extend from development through manufacturing, validation, and post-market monitoring
- Risk-proportionate standards: Frameworks should scale according to context—recognizing that low-risk AI tools may not require the same scrutiny as high-impact clinical systems
- International collaboration: Regulators called for deeper cross-border coordination to close policy gaps and create a trustworthy, transparent AI ecosystem
- Sustained global dialogue: AIRIS will continue as a recurring platform to guide international alignment on AI ethics, interoperability, and access
These recommendations build on progress from the inaugural AIRIS meeting in 2024 and reaffirm WHO’s commitment to advancing global governance for emerging technologies.
What evolving AI regulation means for laboratory compliance
For laboratory managers, the outcomes of AIRIS 2025 offer a preview of how evolving AI regulations may affect data-driven research and operations:
- Validation and documentation expectations may increase: AI-enabled lab software and instruments—such as digital pathology platforms, predictive maintenance tools, or automated image analysis systems—could face stricter testing and traceability requirements
- Data governance is becoming a compliance issue: Lifecycle oversight extends to how labs collect, label, and use data to train or verify AI models; ethical data handling and transparency will be essential to pass audits or meet regulatory expectations
- Equitable access and bias mitigation are emerging standards: As AI tools are adopted in diagnostics and clinical research, managers will need to ensure their systems are fair, interpretable, and accountable (a minimal example of such a check is sketched after this list)
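To make the bias-monitoring point concrete, here is a minimal sketch in Python of a subgroup-performance check. It is illustrative only: the record structure, the group labels, and the five-percentage-point disparity threshold are assumptions for the example, not requirements drawn from the AIRIS outcome statement.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy separately for each subgroup.

    `records` is a list of (group, predicted_label, true_label) tuples;
    this structure is illustrative, not a regulatory standard.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Example: flag any subgroup whose accuracy trails the best-performing
# subgroup by more than 5 percentage points (threshold chosen arbitrarily).
results = accuracy_by_group([
    ("site_a", 1, 1), ("site_a", 0, 0), ("site_a", 1, 0),
    ("site_b", 1, 1), ("site_b", 0, 1),
])
best = max(results.values())
flagged = {g: acc for g, acc in results.items() if best - acc > 0.05}
print(results, flagged)
```

Even a simple check like this, run routinely and logged, gives a lab documented evidence that it is monitoring its AI tools for uneven performance across sites or patient populations.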
By staying informed about regulatory trends and ensuring robust validation and recordkeeping practices, labs can better align with the coming generation of AI oversight.
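On the recordkeeping side, the sketch below shows one way a lab might log validation events in an append-only audit trail: each entry fingerprints the dataset with a hash and records the model version and metrics tested. The file names, field names, and helper functions here are hypothetical, and the documentation a regulator actually expects will depend on the applicable framework.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path):
    """Fingerprint a dataset file so later audits can confirm that the
    validated model was tested against exactly this data."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_audit_record(dataset_path, model_version, validation_metrics,
                       log_path="ai_audit_log.jsonl"):
    """Append one JSON line per validation event; an append-only log is a
    simple way to keep a traceable history of model and data versions."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": str(dataset_path),
        "dataset_sha256": sha256_of_file(dataset_path),
        "model_version": model_version,
        "validation_metrics": validation_metrics,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Usage (hypothetical file and version names): after each validation run,
# log what was tested and how it scored.
# write_audit_record("labels_v3.csv", "segmenter-1.4.2",
#                    {"sensitivity": 0.94, "specificity": 0.97})
```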
Preparing labs for the next phase of AI oversight
The WHO and its partners plan to continue expanding AIRIS as a forum for collaboration among regulators, international organizations, and technical experts. For lab leaders, this evolving dialogue signals a shift toward shared global standards for digital integrity—where transparency, traceability, and accountability define the responsible use of AI in health and science.
This article was created with the assistance of Generative AI and has undergone editorial review before publishing.
