The FDA's Digital Health Advisory Committee (DHAC) met on November 6, 2025, to explore how generative AI–enabled digital mental health medical devices should be regulated—a first-of-its-kind review in a rapidly expanding market projected to hit $11 billion in 2025.
The Big Picture
This meeting builds on DHAC's inaugural November 20-21, 2024 session. While the FDA has cleared many AI-enabled devices, it hasn't yet approved any generative AI-based tools for mental health use. DHAC's latest discussion signals the agency's intent to build a predictable, risk-based framework for these technologies that balances innovation with safety and effectiveness.
What DHAC said:
- The FDA's mission remains to accelerate the delivery of safe, effective device software through consistent, predictable regulation
- FDA will apply a risk-based, total product life cycle (TPLC) approach to regulating generative AI in mental health, aligning with its Draft Guidance on AI-Enabled Device Software Functions
- Higher-risk uses will face greater scrutiny and enforcement priority
Zooming out, FDA clarified that:
- General wellness apps (e.g., motivational tips) fall outside the medical device definition
- Low-risk mental health apps (e.g., anxiety-skill builders) may qualify for enforcement discretion
- Therapeutic apps for diagnosed psychiatric disorders will require premarket review and ongoing oversight
The Takeaway: The FDA is laying the groundwork for how it will oversee generative AI in mental health and beyond—with an emphasis on risk, life cycle management, and clarity for developers.
Key Risk Factors and Oversight Considerations
In assessing the risk of generative AI–enabled digital mental health therapeutics, DHAC weighs potential benefits against potential risks. Relevant considerations include:
- The probable benefit of a device that provides automated therapy, based on its ability to meaningfully treat a mental health condition or symptom
- The level and type of evidence demonstrating clinically significant outcomes under real-world conditions of use (e.g., as adjunctive therapy versus standalone therapy; as a prescription product versus over-the-counter use)
- Whether the device is used with or without healthcare provider oversight
- The likelihood and consequences of device misuse
Given these considerations—and the possibility of generative AI–enabled digital mental health medical devices encountering high-acuity situations without human supervision—DHAC repeatedly emphasized the need for reliable mechanisms to detect and escalate acute safety concerns, such as indications of suicidal ideation or self-harm, to ensure timely human intervention or other appropriate emergency responses.
Technical Vulnerabilities Unique to Generative AI in Mental Health
DHAC also identified two vulnerabilities unique to generative AI–enabled technologies in mental health contexts:
- Sycophancy: A technical term describing the tendency of generative AI systems to produce responses that align with or please the user—even when those responses may be inaccurate or untrue
- Limitations in Metacognition: These systems may struggle to estimate the probability that a user's input does not fit any known category ("none of the above") and may lack sufficient semantic grounding to recognize malformed or ambiguous questions
Although some generative models show a correlation between their reported certainty and the accuracy of their responses, this relationship remains unreliable. Because mental health care is a domain where managing uncertainty and demonstrating epistemic humility are essential, these metacognitive limitations raise significant concerns about the safety and appropriate use of such systems in therapeutic contexts, their clinical validity, and the potential liability if they are deployed without clear safeguards.

A Committee member also observed that the specific risks posed by generative AI–enabled mental health devices are not yet fully agreed upon, though there is broad acknowledgment that the potential for serious harm—including death resulting from incorrect information—is among the most significant. The member emphasized the importance of developing a regulatory framework that can clearly define and quantify these risks so that manufacturers can demonstrate, with evidence, that their systems effectively minimize them before deployment.
Regulating a Moving Target: The Challenge of Model Evolution
The central regulatory challenge for generative AI–enabled digital mental health medical devices is how FDA should evaluate devices that change over time as the underlying models evolve. Because generative AI systems can update, adapt, or shift in unpredictable ways, FDA's predetermined change control plan (PCCP)—a framework that allows manufacturers to pre-authorize certain types of post-market modifications—becomes critical.
However, several foundational questions remain unresolved, including:
- What degree of specificity a PCCP can realistically provide given the adaptive nature of generative AI
- What boundaries or guardrails are needed to define the scope of automatic updates permitted for a device
- What mechanisms are necessary to monitor post-market performance and ensure it is maintained or improved
- How labeling should be updated to inform users when modifications occur automatically
- What notification requirements should apply when a device does not function as intended
The Committee also cautioned strongly against a dynamic in which companies invoke FDA approval as a shield against scrutiny or potential litigation even as their generative AI devices evolve in ways the Agency has not reviewed or anticipated.
Clinical Trials and Human Oversight Remain Critical
DHAC highlighted that addressing model evolution is only one part of the regulatory challenge; clinical trial design will also need to adapt to account for the unique characteristics of generative AI–enabled therapeutics, including performance variability over time. The Committee further underscored the continued importance of physician or other qualified human oversight in mental health applications, given the potential risks of autonomous therapeutic interactions.
Next Steps for Stakeholders
The FDA has opened Docket No. FDA-2025-N-2338 for public comment on this meeting. Separately, it has requested comment on approaches to measuring and evaluating the real-world performance of AI-enabled medical devices under Docket No. FDA-2025-N-4203, "Measuring and Evaluating Artificial Intelligence-enabled Medical Device Performance in the Real World; Request for Public Comment."
Venable will continue to monitor FDA activity in this area and report any critical developments. If you have questions about how the contents of this article may impact you or your business's interactions with the FDA, please contact the authors today.