The FDA plans to roll out its new AI tool, called Elsa, “ahead of schedule and under budget,” according to Commissioner Dr. Marty Makary, who this week offered the most detail yet on the tool and the agency’s AI goals. But questions remain.
After completing a pilot program of AI-assisted scientific reviews last month, the FDA announced it was taking an “aggressive” approach to scaling up Elsa agencywide by June 30, but this week’s progress has moved that timeline up. In a Monday video announcement, Makary described how Elsa will modernize the agency by helping employees, including scientific reviewers and investigators, cut down on busywork.
“The agency is using Elsa to expedite clinical protocol reviews and reduce the overall time to complete scientific reviews,” Makary said. “One scientific reviewer told me what took him two to three days now takes six minutes.”
Other tasks performed by Elsa included “summarizing adverse events to support safety profile assessments, conducting expedited label comparisons and generating code to facilitate the development of databases for nonclinical applications.”
The launch has been criticized by some FDA employees as “rushed,” possibly to compensate for the significant personnel cuts enacted at the agency by the Department of Government Efficiency over the last several months, STAT News reported.
Makary also attempted to address security concerns in the announcement, noting that the language model-powered tool wasn’t trained on data submitted by industry and that its data is held within a secure “GovCloud” environment. The agency provided few other details about how Elsa was trained or how extensively it has been tested.
Questions remain
The FDA plans to focus Elsa’s scope on administrative tasks such as summarizing documents and extracting data. That’s the right approach, according to Panna Sharma, CEO and president of Lantern Pharma, an AI-focused biopharma company.
“These are exactly the applications where current generative AI … completely excels and where we see similar gains in drug development and scientific review internally at Lantern Pharma,” he said. “The key will be maintaining this performance as the system scales across different review types, medicines, diseases, and with differing or incomplete background information or literature.”
The agency noted that Elsa’s introduction will be just “the initial step in the FDA’s overall AI journey,” with the potential to add more tasks like data processing down the line. In May, the FDA also said it would refine its features over time and gather feedback.
“The big question I have focuses on the continuous learning and improvement side of things. How regularly will the FDA audit some of those AI outputs and help identify certain error patterns or maybe hallucinations or problematic prompts that might require retraining?” said Dr. Chase Feiger, CEO and co-founder of Ostro, an AI company that works with many of the largest pharmas.
One area of concern is transparency. Will pharmas know if AI was used in their reviews, for example? If an AI-generated analysis leads to a rejection or delay, sponsors may face the challenge of “limited visibility into how much AI influenced the decision, raising concerns about opaque or unvalidated reasoning,” wrote lawyers from Hogan Lovells in a May blog post.
Other legal experts see the potential for AI tools to have too much influence in review decisions.
“If sponsors are aware that AI was used in scientific review of their applications, the agency’s use of AI could become a topic in future appeals, requests for supervisory review, or Formal Dispute Resolution Requests following unfavorable decisions on premarket applications,” King & Spalding lawyers wrote in a recent client alert.
A handful of regulatory and data experts peppered the FDA’s LinkedIn post of the video announcement with questions about the tool’s finer details, while others sang the praises of the agency’s modernization efforts.
“So many questions,” wrote Dr. Hugh Harvey, founder and managing director of U.K.-based Hardian Health, a software and AI medical device consultancy. “[W]hat verification and validation was done. Where is the public info? The FDA expects industry to do these things, so why not them?”
“They should release a summary of how they developed and validated this AI, including how they used all the internal filings and submissions to build their algorithms,” suggested Troy Trboyevich, director of regulatory affairs at Philips.
AI is here
Even without a detailed explanation, Elsa’s rollout shows the FDA is pursuing an internal AI strategy regardless of outside opinion.
The FDA has been under pressure for several years to create clear regulations for AI in the pharma industry, and in January it released a draft guidance showing how the agency will assess the risk of AI models in drug development. While the guidance offered a first look at a possible framework for how regulators will evaluate pharmas that use AI, many AI experts were left wanting more detail. The first steps it laid out also underscored how early the industry still is in its AI adoption.
“I find it very exciting as well as surprising that the FDA is putting their foot on the gas in terms of helping roll out AI internally within the agency, because it actually aligns quite nicely with the adoption speed that we're seeing within the pharmaceutical manufacturing side of the table,” Feiger said.
And for AI life sciences companies, Elsa’s debut may be a source of validation.
“The timeline is aggressive, especially given that the scope and rigor of the initial pilot testing wasn't fully showcased, but this reflects the urgency FDA feels about modernizing their review processes and doing more with less staff,” Sharma said.