FDA and EMA Set the Direction: AI in Clinical Trials and Drug Development


On January 14, 2026, the two leading global medicines regulators — the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) — took a major step forward in regulating and enabling the use of artificial intelligence (AI) across the pharmaceutical lifecycle. The agencies jointly published common principles of good practice for AI, covering research, clinical trials, manufacturing, and post-market surveillance. This initiative marks a paradigm shift in how regulators view AI and lays the foundation for future practical guidance.

 

What Was Announced: Joint FDA–EMA AI Principles

 

FDA and EMA agreed on a set of 10 “Good AI Practice” principles designed to guide the responsible use of AI in drug development, including clinical trials. These principles are intended as a strategic framework, rather than binding regulations, and apply across the entire medicinal product lifecycle — from early discovery to post-authorization monitoring.

 

The ten principles are:

 

  1. Human-centric by design — AI systems must keep patients and expert oversight at the center of decision-making.

  2. Risk-based approach — the level of validation and regulatory scrutiny should reflect the potential impact on patient safety and data integrity.

  3. Adherence to standards — AI models should follow recognized technical and scientific standards.

  4. Clear context of use — the clinical or regulatory purpose of each AI system must be clearly defined.

  5. Multidisciplinary expertise — development and oversight should involve clinical, technical, regulatory, and ethical experts.

  6. Robust data governance and documentation — transparency regarding data sources, model training, and assumptions is essential.

  7. Sound design and development practices — AI must be developed and tested using established engineering best practices.

  8. Risk-appropriate performance evaluation — testing requirements should align with the seriousness of the intended clinical use.

  9. Lifecycle management — AI systems require continuous monitoring, updating, and reassessment over time.

  10. Clear and accessible documentation — essential information must be understandable and available to regulators and stakeholders.

Why This Matters

 

Until now, regulatory expectations for AI in clinical research have been fragmented across regions. This joint FDA–EMA initiative represents one of the most significant transatlantic alignments to date in the governance of AI for drug development.

 

The stated goal is to accelerate innovation while maintaining patient safety, data quality, and scientific integrity. The principles aim to support responsible adoption of AI in areas such as clinical trial design, patient recruitment, data analysis, safety monitoring, and regulatory decision-making.

 

For pharmaceutical and biotech companies, these principles provide a clearer signal of how AI-enabled tools may be assessed and accepted across both U.S. and European regulatory pathways.

 

AI in Clinical Trials: What Comes Next

 

The published principles are expected to serve as the foundation for future, more detailed regulatory guidance, likely to be released later this year by both agencies. These forthcoming documents are anticipated to outline technical expectations, validation requirements, and best practices for deploying AI in regulated clinical environments, including:

 

  • AI-driven patient identification and recruitment

  • Automated safety signal detection and monitoring (see the illustrative sketch after this list)

  • Predictive models to optimize trial design and reduce development timelines

  • Support tools for regulatory submissions and lifecycle decision-making
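
To give a concrete, simplified sense of what one item on this list (automated safety signal detection) can look like in practice, the short Python sketch below computes a proportional reporting ratio (PRR), a classical disproportionality statistic used in pharmacovigilance screening. The agencies' principles do not prescribe any particular method, and the counts and thresholds here are hypothetical assumptions for illustration only, not regulatory guidance.

    # Illustrative sketch only: a classical disproportionality statistic
    # (the proportional reporting ratio, PRR) sometimes used as a first-pass
    # screen in automated safety-signal detection. All counts and thresholds
    # below are hypothetical assumptions, not regulatory requirements.

    def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
        """PRR = [a / (a + b)] / [c / (c + d)]
        a: reports with the drug of interest and the adverse event
        b: reports with the drug of interest and other events
        c: reports with all other drugs and the adverse event
        d: reports with all other drugs and other events
        """
        return (a / (a + b)) / (c / (c + d))

    def flag_for_review(a: int, b: int, c: int, d: int,
                        prr_threshold: float = 2.0, min_cases: int = 3) -> bool:
        """Flag a drug-event pair for expert review, not for autonomous action.
        The commonly cited screening values (PRR >= 2, at least 3 cases) are
        used here purely as example thresholds.
        """
        return a >= min_cases and proportional_reporting_ratio(a, b, c, d) >= prr_threshold

    # Hypothetical counts from a spontaneous-reporting database
    a, b, c, d = 12, 488, 150, 24850
    print(f"PRR = {proportional_reporting_ratio(a, b, c, d):.2f}")   # PRR = 4.00
    print(f"Flag for human review: {flag_for_review(a, b, c, d)}")   # True

Consistent with the human-centric and risk-based principles above, the output of such a screen is only a prompt for qualified experts to investigate further, never an autonomous safety decision.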

 

Risk, Safety, and Accountability

 

A key message from regulators is that AI will not operate autonomously in clinical trials. Human oversight remains essential, and accountability for decisions will continue to rest with qualified professionals. This approach aligns with broader international discussions on AI ethics, transparency, and trust in healthcare.

 

Conclusion

 

The publication of joint FDA–EMA principles for AI represents a defining moment for the pharmaceutical and clinical research ecosystem. While not yet legally binding, these principles offer a clear strategic compass for sponsors, CROs, technology developers, and regulators alike.

 

They signal a shared commitment to enabling responsible, transparent, and patient-focused innovation, ensuring that AI accelerates drug development without compromising safety or regulatory rigor.
