June 20, 2025
by Ferdous Al-Faruque

FDA: Elsa cannot hallucinate when used with document libraries; unlikely to be connected to the Internet

Editor's note: The story and headline were updated on 23 June to provide additional context on the design of Elsa.
 
WASHINGTON – The US Food and Drug Administration (FDA) plans to make significant updates to its recently launched artificial intelligence (AI) tool Elsa within the next month and has taken critical steps to ensure that it does not hallucinate like other AI tools, agency officials said at a town hall session at the DIA Global Annual Meeting.
 
FDA launched Elsa earlier in June to mixed reviews from agency staff. While some said the software was able to provide accurate responses to their regulatory questions, others noted that the AI was trained on data only up to April 2024. (RELATED: FDA’s Elsa AI tool gets mixed response from some staff, Regulatory Focus, 4 June 2025)
 
FDA Chief AI Officer Jeremy Walsh said that the agency is quickly updating Elsa and plans to add capabilities within the next 30 days. Beyond that, the agency is also working to add new features.
 
Walsh emphasized that FDA has taken measures to prevent Elsa from hallucinating.
 
"There are certain parts of the system the way it's designed so that when you're working on documents it forces citations,” said Walsh. “It can't hallucinate, it's not allowed to come up with figments of its imagination."
 
He said FDA has provided training, guidance, and pop-up messaging to ensure that agency staff verify the information they get from Elsa. He noted that when staff are working on documents, the system provides citations so that staff can validate Elsa’s output.
 
"These are really brilliant people, they know when something's coming up that's not right,” he said. “Also, the workstreams that people work on go through multiple layers of individuals. There's not really a chance that something's going to be pushed out by the FDA that hasn't been reviewed by a lot of eyes."
 
Walsh later clarified to Focus that Elsa is designed not to hallucinate if used as intended.
 
"If users are utilizing Elsa against document libraries and it was forced to cite documents, it can't hallucinate," he said. "If users are just using the regular model without using document libraries, Elsa could hallucinate just like any other large language model."
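The design Walsh describes resembles retrieval-grounded question answering: the model may only answer from a curated document library and must attach citations, or it returns nothing. FDA has not published Elsa's implementation, so the sketch below is purely illustrative; all names (`DOC_LIBRARY`, `answer_with_citations`) and the keyword-matching retrieval are assumptions, not Elsa's actual design.

```python
# Illustrative sketch only -- FDA has not disclosed Elsa's internals.
# Shows the general "forced citation" idea: answers must be grounded in a
# document library and carry citations, or the system refuses to answer.

DOC_LIBRARY = {
    "guidance-2023-01": "Premarket submissions must include a device description.",
    "guidance-2024-07": "Sponsors should validate AI models used in review tools.",
}

def answer_with_citations(question: str) -> dict:
    """Return an answer only if it can be grounded in the library."""
    terms = set(question.lower().split())
    hits = [
        (doc_id, text)
        for doc_id, text in DOC_LIBRARY.items()
        if terms & set(text.lower().split())  # naive keyword overlap
    ]
    if not hits:
        # Forced-citation mode: no source document, no answer.
        return {"answer": None, "citations": []}
    return {
        "answer": " ".join(text for _, text in hits),
        "citations": [doc_id for doc_id, _ in hits],
    }
```

Used without a document library, as Walsh notes, a general-purpose model has no such grounding constraint and can hallucinate like any other large language model.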
 
Elsa currently does not have direct access to the Internet, and allowing it to have access could pose security challenges, according to Walsh. He described scenarios where the system could accidentally share proprietary information after being connected to the Internet and given sensitive agency documents.
 
"I don't know if Elsa will ever be able to have real-time access to the internet,” said Walsh. “None of our models, especially Elsa, are being exposed or open to the Internet. That’s a big security risk.”
 
Ultimately, Walsh said that FDA's aim is to move toward a real-time regulatory environment. That could mean receiving, reviewing, and approving applications in real time, he said, as well as conducting inspections, surveillance, and compliance monitoring in real time.
 
"Our goal is to look at the way that we are doing these functions and to figure out how we move it toward real time," said Walsh. "I think the AI tool we recently released, Elsa, is a great example of that.”
 
As an example of how AI tools could support real-time regulation, Walsh said the fastest the agency has been able to produce studies from FDA's Sentinel Initiative, which tracks safety reports for regulated products, is 3-4 weeks. Using AI tools, he said, those reports could potentially be generated in real time, improving both premarket and postmarket surveillance.
 
Using AI tools isn’t new to FDA, Walsh said. Before Elsa was launched, staff in different parts of the agency tested AI tools and found certain capabilities could help accelerate the premarket application review process. As a result, he said, agency officials decided to "democratize" AI tools so that all staff could benefit from those capabilities.
 
As FDA realized it needed to adopt AI tools, Walsh said, the agency also realized it needed new and better infrastructure to support the technology's growing use. To build that infrastructure, he said, the agency ensured that all the data used for Elsa and the solutions developed around it complied with Federal Information Security Modernization Act (FISMA) high-impact requirements. He also said the AI models are not being trained on that data, even though some industry stakeholders have asked for this; it may be a possibility worth discussing later, he said.
 
Since Elsa was released, Walsh said, an average of around 6,000 agency staffers have used it weekly in different ways, such as in the review process, for administrative tasks, and to redact information from real-world data (RWD) adverse event reports so they can be shared publicly.
 
"Every employee at FDA has access to these types of tools, and they're using them to help better their job function, to make their job easier, more efficient, and we're seeing huge value out of that," said Walsh.