
Regulatory News
Posted 21 March 2022 | By Mary Ellen Schneider 

Report: Using RWD to evaluate AI-enabled clinical decision support tools

A new analysis from the Duke-Margolis Center for Health Policy outlines the data elements that real-world data (RWD) sources would need to capture in order to evaluate the performance of artificial intelligence (AI)-enabled clinical decision support tools, as well as the ongoing challenges related to data quality, privacy and security.
 
AI-enabled clinical decision support tools can be difficult to evaluate after deployment because of the non-standardized nature of electronic health record (EHR) systems and continually changing clinical workflows. Using RWD to assess the clinical performance of these AI-enabled tools could allow evaluations to be done more efficiently, according to the Duke-Margolis white paper.
 
The US Food and Drug Administration (FDA) has expressed interest in RWD in the postmarket evaluation of AI-enabled clinical support tools, some of which the agency regulates as medical devices (“software as a medical device” or SaMD). In 2021, the agency released its action plan on AI/machine learning (ML) software, which signaled that RWD could potentially be used for assessing performance of SaMD technologies. (RELATED: FDA’s AI/ML action plan includes ‘tailored’ regulatory framework for SaMD, Regulatory Focus 18 January 2021)
 
Data elements
 
The white paper authors identified the following categories/data elements as generally required to evaluate software tool performance:
  • Model output: The model output would include the risk score, diagnosis or a suggested action. The data elements would be based on the intended use and would potentially be located in the EHR or an ancillary EHR data system.
  • Comparison to output: The comparison to the model output is the observed outcome, such as rehospitalization or death. The data elements would be based on the intended use and would potentially be located in the EHR system, claims data or a registry.
  • Operational data: The operational data or model input could include medication administration, procedures, or patient actions. The data elements would vary based on the intended use and would potentially be located in the EHR system, claims data or a registry.
  • Demographic subgroup analysis variables: The data elements needed for demographic subgroup analysis could include age, sex and race; insurance status; and medical history. These could come from EHRs, claims data, registries or internal device data.
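To make the taxonomy above concrete, the four categories could be gathered into a simple record structure like the sketch below. This is purely illustrative: the white paper defines only the categories and candidate data sources, not a concrete schema, so every field name and the `RWDSource` enum here are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class RWDSource(Enum):
    """Illustrative data-source categories named in the white paper."""
    EHR = auto()       # electronic health record system
    CLAIMS = auto()    # insurance claims data
    REGISTRY = auto()  # disease or device registry
    DEVICE = auto()    # internal device data

@dataclass
class EvaluationRecord:
    """One record pairing a tool's output with real-world data.

    Field names are hypothetical; they map one-to-one onto the four
    categories the white paper authors identified.
    """
    model_output: float        # e.g., a risk score produced by the tool
    observed_outcome: bool     # "comparison to output", e.g., rehospitalization
    operational_data: dict     # model inputs: medications, procedures, actions
    demographics: dict         # subgroup variables: age, sex, race, insurance
    sources: set[RWDSource] = field(default_factory=set)

# Example: a record assembled from EHR and claims data
record = EvaluationRecord(
    model_output=0.82,
    observed_outcome=True,
    operational_data={"procedures": ["dialysis"]},
    demographics={"age": 67, "sex": "F"},
    sources={RWDSource.EHR, RWDSource.CLAIMS},
)
```

In practice each field would be populated from a different siloed source, which is exactly the linkage problem the white paper's "Challenges" section raises.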
 
Regulators may also want to capture additional data if they are concerned with patient outcomes, according to the white paper. For example, RWD could be used to examine whether health care providers acted on the algorithm's recommendations or whether factors such as early action or additional treatments confounded the algorithm's performance.

Challenges
 
One of the challenges in using RWD to assess AI-enabled clinical decision support tools is that the data needed may not always be available, it may not be of high enough quality, or it may be spread out among siloed data sources. The white paper authors called on policymakers and other stakeholders to address these issues, including the lack of data on patient health status and long-term outcomes, the lack of data on how physicians use algorithm outputs, inconsistent data across the continuum of a patient’s care, and systemic biases within data sources that make it difficult to find representative data.
 
Additionally, the privacy of patient data is protected through a patchwork of federal and state laws that can complicate data sharing by software developers, health systems, and the FDA. To ensure the flow of information, separate legislation may be needed spelling out when information can be shared.
 
“Such legislation could, for example, take the form of future state healthcare facility licensure statutes requiring facilities that implement [clinical decision support] tools to supply RWD to developers to help them detect any problems. To protect patients’ privacy, those same statutes could set limits on how developers can use the RWD, for example, by imposing privacy standards or limiting downstream re-disclosures of RWD once it is in the developer’s hands,” the white paper authors wrote.
 
In the meantime, regulators and other stakeholders will need to ensure the security of cloud computing and storage platforms, the authors noted. These platforms should have appropriate security controls, be audited by third-party experts, and comply with existing security compliance frameworks.
 
The white paper recommendations are based on a private, two-day virtual workshop held in July 2020, a literature review, and informational calls. The project was funded by the Gordon and Betty Moore Foundation.
 
Duke-Margolis Center for Health Policy white paper

 

© 2022 Regulatory Affairs Professionals Society.

