ICMRA: Address artificial intelligence challenges with permanent working group

Regulatory News | 16 August 2021

The rapid expansion and evolution of artificial intelligence (AI) will challenge current pharmaceutical and device regulatory frameworks, according to a new horizon-scanning report from an international consortium of regulators. The ad hoc group recommends establishing a permanent working group to stay abreast of AI regulation in the development and assessment of medicinal products.
A 6 August report from the International Coalition of Medicines Regulatory Authorities (ICMRA) detailed the outcomes of two case studies designed to serve as “stress tests” for member agencies. The case studies, wrote ICMRA authors, highlight the need to engage ethics experts as regulators and the pharmaceutical industry increase use of AI. The report also recommends continuing to build out a regulatory framework for AI that takes into account such factors as validity, data provenance, and the reliability and transparency of AI algorithms.
The report springs from an informal network for innovation made up of ICMRA member regulators from Italy, Denmark, Canada, Ireland, Switzerland, the European Medicines Agency (EMA) and the World Health Organization. The US Food and Drug Administration (FDA) participated in observer status.
“Opportunities to apply AI occur across all stages of a medicine’s lifecycle: from target validation and identification of biomarkers, to annotation and analysis of clinical data in trials, pharmacovigilance and clinical use optimization,” noted the report’s executive summary.
One significant challenge that spans the range of AI applications is the lack of transparency in algorithms that govern AI when machine learning techniques are employed. Machine learning can create a “black box” situation, where the exact processes by which results are produced are no longer directly observable, with the potential for clinical and regulatory conundrums. (RELATED: Radiologists to FDA: Autonomous AI not ready for prime time, Regulatory Focus 02 July 2020)
The first hypothetical case study developed by the working group was an app designed for central nervous system-related data capture. The hypothetical app would assist in selecting patients for participation in clinical trials by recording and analyzing the baseline disease status of prospective participants. The app would also track adherence to trial interventions, response to therapies, and endpoints defined as changes in disease status; the working group envisioned a post-approval role for the app as well.
This case study, with its ample “complexity and novelty,” made clear the importance of early advice from regulators during the product development process, according to the report. Medical device regulators and academic experts would likely be pulled into the process as scientific, ethical and legal considerations would need to be weighed alongside regulatory decisions.
Conformity assessment of such a complicated product could present significant challenges, the working group found, returning to the “black box” problem. Even if regulators were given access to the algorithms and the training and validation datasets used for AI development, full validation might not be possible; “more sophisticated approaches may be needed such as investigating machine behavior,” wrote the report authors.
Apps need to be updated, and bridging or validation studies could reveal a post-update change to the benefit-risk profile of the product, which could then trigger the need for an additional regulatory submission. Developers, whose duty it is to carry out these tests, should “ideally” have robust governance structures that monitor AI algorithms as they evolve with use.
The second case study envisions AI use in a pharmacovigilance setting. “In principle, AI systems appear suitable for the detection of safety signals,” according to the report, and show promise for reducing the “heavy manual component” that signal detection tools often still need to employ.
However, when AI takes a more prominent role in pharmacovigilance activities, “the challenge will lie in getting the balance right between AI and human oversight,” with the ongoing calculation of benefit and risk for each therapy.
The ability of AI to scan very large datasets and aggregate disparate information shows promise for teasing out safety signals that can be missed with methods currently in use, noted the report authors. These include signals related to drug-drug and drug-disease interactions, secondary malignancies, drug misuse, and changes in patterns of drug use and of adverse events.
Here, the marketing authorization holder (MAH) needs to ensure that specialists in data quality and pharmacovigilance signal detection work alongside AI experts. If a third party is involved in running the AI component of the pharmacovigilance program, that party must assure the MAH and regulators that they will maintain and update the AI as necessary, and allow appropriate access to the AI tool.
The report outlined current AI activities and future strategies for several regulators, including Health Canada, Japan’s Pharmaceutical and Medical Devices Agency (PMDA), the EU’s European Medicines Agency and relevant European Commission agencies, Swissmedic, and Australia’s Therapeutic Goods Administration (TGA). Commonalities include a recognition of the potential of AI to contribute to pharmaceutical development and pharmacovigilance as well as to ease the work of regulators. However, the horizon scan revealed the critical need for regulatory science to stay abreast of rapid development in the field of AI, along with a need for a robust ethics framework.
ICMRA report



Tags: AI, EMA, ethics, EU, FDA, machine learning, US
