FDA announces AI rollout timeline and scientific review pilot completion

The FDA said the timing of its push to scale the use of AI across its centers follows the completion of a generative AI pilot for scientific review.

In what the agency called “a historic first,” FDA Commissioner Martin Makary announced an aggressive timeline to scale use of artificial intelligence (AI) internally across all FDA centers by June 30, 2025, following the completion of a new generative AI pilot for scientific reviewers.

The generative AI tools allow FDA scientists and subject-matter experts to spend less time on tedious, repetitive tasks that can slow down the review process.

Dr Makary has directed all FDA centers to begin deployment immediately, with the goal of full integration by the end of June. By June 30, all centers will be operating on a common, secure generative AI system integrated with the FDA’s internal data platforms; after that date, work will continue to expand use cases, improve functionality, and adapt to the evolving needs of each center.

“There have been years of talk about AI capabilities in frameworks, conferences and panels but we cannot afford to keep talking. It is time to take action. The opportunity to reduce tasks that once took days to just minutes is too important to delay,” said Dr Makary.

Next steps

The FDA plans to expand generative AI capabilities across all centers using a secure, unified platform. Future enhancements will focus on improving usability, expanding document integration, and tailoring outputs to center-specific needs, while maintaining strict information security and compliance with FDA policy.

The agency-wide rollout is being coordinated by Jeremy Walsh, the FDA’s newly appointed Chief AI Officer, and Sridhar Mantha. Walsh previously led enterprise-scale technology deployments across federal health and intelligence agencies, and Mantha recently led the Office of Business Informatics in the FDA’s Center for Drug Evaluation and Research.

The FDA said additional details and updates on the initiative will be shared publicly in June.

Prior FDA guidance

Earlier this year, the FDA issued two draft guidance documents: one on the use of AI to produce information supporting regulatory decisions about a drug or biological product’s safety, effectiveness, or quality, and another on the development and marketing of safe and effective AI-enabled devices.

In the drug and biological products guidance, the FDA proposed a risk-based credibility assessment framework, including a seven-step process to establish and assess an AI model’s credibility for a specific use. The main risks the agency was concerned with included:

  • bias and reliability problems due to variability in the quality, size, and representativeness of training datasets;
  • the black-box nature of AI models in their development and decision-making;
  • the difficulty of ascertaining the accuracy of a model’s output; and
  • the dangers of data drift and a model’s performance changing over time or across environments (one common monitoring check is sketched after this list).
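
The draft guidance does not prescribe any particular monitoring technique, but to make the data-drift risk concrete, here is a minimal Python sketch of one approach practitioners commonly use: the population stability index (PSI), which compares the distribution of a model input or score at training time against a recent production sample. The function name, threshold, and synthetic data below are illustrative assumptions, not anything drawn from the FDA documents.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Population stability index (PSI) between a reference sample
    (e.g., training-time data) and a current production sample of a
    continuous feature or model score."""
    # Bin edges come from the reference sample's quantiles, so each bin
    # holds roughly the same share of the reference data.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values

    ref_share = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_share = np.histogram(current, bins=edges)[0] / len(current)

    # Floor the shares so empty bins do not produce log(0).
    ref_share = np.clip(ref_share, 1e-6, None)
    cur_share = np.clip(cur_share, 1e-6, None)

    return float(np.sum((cur_share - ref_share) * np.log(cur_share / ref_share)))

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    training = rng.normal(0.0, 1.0, 10_000)    # training-time feature values
    production = rng.normal(0.4, 1.2, 10_000)  # shifted production values
    print(f"PSI = {population_stability_index(training, production):.3f}")
```

A common rule of thumb treats a PSI above roughly 0.2 as a signal that an input distribution has shifted enough to warrant re-validating the model, which is the kind of ongoing check the guidance’s concern about data drift points toward.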

To illustrate how this framework can be applied, the guidance provides hypothetical examples in clinical development and commercial manufacturing. The guidance encourages sponsors using this framework to engage early with FDA and envisions interactive feedback.

In its second guidance document targeting medical devices, the FDA explains what documentation and information should be included in marketing submissions for devices with AI-enabled device software functions.

The guidance explains how sponsors should describe the AI aspects of their devices in marketing submissions and identifies what sponsors of AI-enabled devices must disclose in the devices’ labels. The recommended disclosures for marketing submissions and labels require sponsors to provide detailed information about their devices’ uses, inputs, outputs, architecture, development, performance, installation, and maintenance, among other things.

As panelists noted at the Pharmaceutical Compliance Congress annual event late last month in McLean, Virginia, AI technology has the potential to transform healthcare by deriving new insights from the data generated during healthcare delivery, but the risks must be carefully managed to ensure that the challenges unique to AI applications, and to the technology in general, are well understood and monitored effectively.