
Radiologists to FDA: Autonomous AI not ready for prime time

Posted 02 July 2020 | By Kari Oakes 

Artificial intelligence is not ready to operate autonomously in radiology, according to two radiological professional associations, which asked the US Food and Drug Administration (FDA) to wait for more rigorous testing and surveillance of the technology before authorizing its autonomous use in medical imaging.
 
In a follow-up to a February 2020 workshop focused on artificial intelligence (AI) in medical imaging, the chairs of the American College of Radiology (ACR) and the Radiological Society of North America (RSNA) said in a joint letter that they have “some concerns with the approaches suggested at the workshop by a number of researcher/developer presentations with respect to FDA authorization pathways for autonomously functioning AI algorithms in medical imaging.”
 
The two organizations “believe it is unlikely FDA could provide reasonable assurance of the safety and effectiveness of autonomous AI in radiology patient care without more rigorous testing, surveillance, and other oversight mechanisms throughout the total product life cycle,” according to Howard B. Fleishon, MD, of ACR and Bruce G. Haffty, MD, of RSNA, who added that having AI perform autonomous image interpretation at a safe level “is a long way off.”
 
Fleishon and Haffty advocated for FDA to hold off on further approvals until supervised AI algorithms in current use achieve broader market penetration, so the agency can better understand the efficacy and safety of these systems. FDA could then use this information to shape both the premarket approval and postmarket surveillance processes for autonomous AI, they said.
 
Specifically, the letter calls for AI algorithms to be tested using multi-site heterogeneous data sets, “to ensure a minimum level of generalizability across diverse patient populations as well as variable imaging equipment and imaging protocols.” Postmarket oversight by FDA should ensure that AI algorithms work as expected over the long term, and labeling should clearly state which equipment and protocols are validated for use with the AI, Fleishon and Haffty said.
 
The associations’ letter also referenced a 2019 discussion paper from FDA that proposes a regulatory framework for AI- and machine learning-based software as a medical device (SaMD). Some SaMD algorithms are “locked,” while others use machine learning techniques to be “continuously learning.” With regard to continuously adaptive algorithms, Fleishon and Haffty said, “we believe that without the safeguards provided by direct physician-expert oversight during each use in addition to a total product life cycle (TPLC) regulatory approach with comprehensive, real-time monitoring of deployed products, it may be infeasible for FDA to ensure the safety and effectiveness of continuously adaptive algorithms.”
 
The letter suggests that, for now, AI could best be used in radiology to assist with population health management by detecting and quantifying incidental findings that may be associated with chronic diseases.
 

Tags: devices, FDA, medical, US
