Industry, clinician groups have different wish lists for AI/ML-enabled device labels

Posted 18 November 2021 | By Mary Ellen Schneider 
Medical device industry groups are urging the U.S. Food and Drug Administration (FDA) not to rush to create new regulatory requirements around the labeling of medical devices that incorporate artificial intelligence or machine learning (AI/ML), while clinician groups are seeking greater transparency about device algorithms and training data sets.  
In total, 15 groups offered comments following a virtual public workshop on the transparency of AI/ML-enabled medical devices held by FDA in mid-October. The meeting focused on ways to provide transparency through labeling and other public-facing information, including the possibility of a label for these medical devices styled after the nutrition facts panel on food labels.
FDA has been working to formulate a plan for regulating AI/ML-enabled medical devices; in January 2021, the agency issued a five-part action plan that called for developing a proposed regulatory framework and guidance on a predetermined change control plan, supporting development of good ML practices to improve ML algorithms, fostering a patient-centered approach through transparency to users, developing methods to evaluate and improve ML algorithms, and advancing real-world performance monitoring pilots. (RELATED: FDA’s AI/ML action plan includes ‘tailored’ regulatory framework for SaMD, Regulatory Focus 18 January 2021)
Industry response
Several industry groups weighed in following the public workshop to support transparency within FDA’s existing regulatory frameworks for medical device labeling.
Patrick Hope, executive director of the Medical Imaging and Technology Alliance (MITA), said that the medical imaging industry already follows a rigorous risk management process to identify what information should be provided in labeling or highlighted to the user in the instruction manual. “Labelling information does not generally include detailed product design characteristics or proprietary development information,” Hope wrote in comments to FDA.
AI devices should not automatically trigger new or additional regulatory oversight, said Ralph F. Hall, principal at Leavitt Partners and an advisor to the Artificial Intelligence in Medical Imaging Coalition (AIMIC). FDA should instead base its regulation on considerations about product risks and benefits, he said.
The Advanced Medical Technology Association (AdvaMed) said FDA’s “current robust labeling framework” would be an effective mechanism for communicating information on the safe and effective use of medical devices with AI/ML. “Any unique needs for AI/ML can be incorporated into this existing FDA labeling framework to ensure that labeling is consistent, clear and understandable. We also encourage FDA, to the extent possible, to align with the expectations of international regulators and related bodies, users and patients, and healthcare providers,” wrote Zachary A. Rothstein, AdvaMed’s senior vice president for technology and regulatory affairs.
Regarding the concept of a “Nutrition Facts” style label, AdvaMed said there should be standardized types of information in the labeling, but that a single format might not be practical because of the wide variety of product types from diagnostic to therapeutic.
“Requiring a single labeling format that requires specific structured information would not be appropriate given the varying ways in which performance is evaluated and the need to provide significant context for interpreting that information,” Rothstein wrote in comments to FDA. “Additionally, standardized approaches to labeling these devices will need to account for new technology advances that will undoubtedly come quickly.”
AdvaMed encouraged FDA to instead develop a core set of information types to include in AI/ML labeling that could include performance evaluation data, limitations of use, and intended use within current clinical workflow.
Clinicians call for transparency
The American College of Radiology (ACR) commented that providers need to be able to access product labeling and performance data on AI/ML-enabled devices to make better decisions about technology acquisition and the overall trustworthiness of the AI/ML in clinical use.
Specifically, ACR called for disclosure of the performance testing dataset characteristics for population demographics, including age, sex, race, and ethnicity; number of facilities; acquisition devices, including manufacturer and model; and source of ground truth. “Such data could even be made accessible in a public database that allows searching by multiple parameters,” wrote Howard B. Fleishon, chair of the ACR’s Board of Chancellors.
ACR also highlighted concerns about reporting software underperformance to the agency. FDA’s Medical Device Reporting (MDR) mechanism would fail to capture instances of software underperformance if they do not result in an adverse event, Fleishon explained. “As none of the available AI/ML-enabled radiology software has been authorized for autonomous or unsupervised use, instances of poor model performance are typically recognized and corrected at the point of service by the radiologist before leading to patient harm,” he wrote.
ACR called on FDA to establish an alternative mechanism for compiling performance concerns from users of AI/ML-enabled devices, even if they do not trigger traditionally reportable adverse events. Those reports should be accessible to other providers and consumers via a national registry, for instance, ACR suggested.
The American College of Surgeons (ACS) echoed the need for providers to have greater access to information on the device’s algorithm training data. “If the device were developed with data sets that are not representative of the care setting, geographic region, race, ethnicity, age, gender, etc. of the patient population(s) the physician serves, there is potential for serious inaccuracies in the device outputs, which could be potentially harmful to patients,” wrote David B. Hoyt, MD, the ACS executive director.
ACS suggested FDA officials consider two types of labels for AI/ML-enabled medical devices to account for the sophistication of the user. Some users would require detailed information on the clinical validation, training data and population, and development and testing setting; for other users, however, this level of detail could be confusing, ACS noted.
Public comments on transparency of AI/ML-enabled medical devices


© 2021 Regulatory Affairs Professionals Society.
