November 11, 2025
by Jeff Craven

Medical groups make recommendations for FDA regulation of genAI mental health devices

Representatives from the American Psychological Association and American Psychiatric Association offered the US Food and Drug Administration (FDA) a set of recommendations for how to think about regulating generative artificial intelligence (AI)-enabled digital mental health medical devices in two presentations at the agency’s Digital Health Advisory Committee (DHAC) meeting last week.
 
FDA said the focus of the 6 November meeting was to assess the potential for using generative AI-based technologies, particularly digital mental health medical devices, to diagnose and treat psychiatric conditions.
 
The DHAC meeting was notable for a series of FDA discussion questions on regulatory considerations for a hypothetical future AI-enabled therapy chatbot that could diagnose and treat major depressive disorder in children and adults; the questions greatly concerned the convened panel of experts and, at times, left them speechless. (RELATED: FDA questions on genAI-enabled chatbots raise concerns from expert panel, Regulatory Focus 10 November 2025)
 
Earlier in the meeting, Vaile Wright, senior director in the office of health care innovation at the American Psychological Association, said the mental health crisis in the US intersects with a workforce shortage, leaving many patients unable to access evidence-based care. Telehealth has helped somewhat to bridge this gap, Wright said, but there remains an unmet need in this area.
 
Direct-to-consumer apps are another piece of this puzzle; they “play a critical role in helping people understand how to engage in better healthy living, but again, they're not replacements for psychotherapy,” Wright said.
 
What’s more promising is the emerging landscape of prescription digital therapeutics that “deliver evidence-based, clinically validated psychological treatments,” she said. These products would meet an established bar of safety and efficacy, since they satisfy the requirements for software as a medical device and require a prescription from a licensed provider.
 
“I think the challenge, however, is this gray space in between these two products, where you have products that are maybe marketing themselves one way for intended use, but are actually being used in a different way,” she explained.
 
For instance, one recent survey showed 48.7% of US respondents with at least one mental health condition used a large language model for psychological support within the last year, Wright said.
“Whether that’s to address anxiety, personal advice or depression, it's clear that people are seeking out this technology resource as a way to address uncertainty in their lives and potentially to even engage in some reassurance seeking about why they're struggling the way that they are,” she said.
 
Wright said she believes that FDA-regulated mental health chatbots are a likely future, as well as prescription digital therapeutics that use generative AI for engagement. “Unfortunately, that’s not really what's on the market currently,” she said.
 
One contributing factor to this market environment is a lack of regulation that can keep pace with innovation in the AI field, which results in a lack of reimbursement, data privacy concerns, health infrastructure issues, and a lack of trust and understanding among the public and providers, Wright said.
 
On behalf of the American Psychological Association, she urged policymakers to use existing regulatory frameworks to “address the realities of AI and mental healthcare.” FDA should also modernize regulations surrounding stated intent of technologies and actual use, and strengthen requirements around pre-market evidence and post-marketing monitoring for digital tools. Gaps in agency oversight should be bridged with a new interagency approach to evaluate safety and validity of digital mental health tools using generative AI.
 
“I think at a minimum, we need to require that developers and others follow the existing FDA transparency guidelines as a baseline expectation to address gaps in transparency,” she said. Digital tools should have “clear, enforceable standards” regardless of whether they are apps intended for entertainment, direct-to-consumer, or digital therapeutics.
 
“[M]ost critically, from my perspective, I think we need to establish a public repository of products that have received FDA clearance—so, those that are prescription digital therapeutics—so that providers and consumers know exactly what products have been deemed safe and effective, as well as those operating under enforcement discretion,” Wright said. “It’s a very unclear space, and so we need clear guidance for both public and developers about what it means and does not mean to fall under enforcement discretion.”
 
“I think this would really go a long way to improving transparency and helping everybody understand exactly what the space looks like,” she said.
 
Oversight of genAI in mental health
 
In a separate presentation later in the meeting, Brooke Trainum, senior director of practice policy at the American Psychiatric Association, echoed Wright’s concerns about parasocial relationships between patients and generative AI chatbots.
 
“[P]roviders need to be aware of how to talk to their patients about these models,” she said, but physicians cannot be the only source of oversight for these products.
“There is a role for regulators, and there needs to be more restrictions on these models,” Trainum said. “But there is a role for physicians [in] being able to discuss how AI tools [are] being used in the clinical space, including the privacy and transparency considerations, the intended uses and training model data, and how it's used safely when under the care of the provider or used outside of the therapeutic relationship.”
 
When implementing generative AI-based products, physicians encounter a unique set of challenges and bear the greatest safety and liability risk in the absence of additional oversight and regulatory guidance. Developers should share some of that responsibility, she noted.
 
“Regulators must shift the responsibility back to developers before these tools hit the market,” she explained. “Developers must be accountable for the safety, accuracy and ethical use of these models, not just the providers.”
 
Trainum also pointed out there is currently no high-quality evidence that current generative AI models are effective at delivering mental health care. “We need larger sample size[s] with appropriate controls and standardized metrics or benchmarks to properly determine the efficacy of generative AI applications in mental health care,” she explained. “We need to specifically evaluate the technical development, clinical and ethical components, and how they need to be applied in the legal and regulatory structures.
 
“Research must center on the human experience rather than just on the capabilities of the technology,” she added.
 
On behalf of the American Psychiatric Association, Trainum recommended FDA implement requirements for standardized labeling for all AI tools. “Information such as model identifiers, indications for use, validation studies, training data, transparency, limitations and warnings, privacy and security, and contact information need to be available,” she said.
 
Another recommendation is that FDA guidance on AI should have a patient-centered design and evaluation, and models should be developed in tandem with mental health experts and diverse patient groups. “Current models are trained and evaluated on small non-diverse populations, and we cannot apply those findings more broadly at this point,” Trainum said.
 
High-risk tools need to have clinical oversight, and the public should be prevented from accessing these tools when the risk is too high. Continued research into safety and efficacy is needed, as well as guidance on post-marketing surveillance, including guidance on unintended consequences surrounding the use of AI tools.
 
“AI has the potential to help us achieve gains in clinical practice, and we must make those gains, but we must have safeguards for the lowest risk models, such as administrative tasks, and restrict implementation at this time for high-risk models that are causing harm,” Trainum said. “AI tools require human oversight.”
 