
July 5, 2023
by Jeff Craven

Study: Some devices claim AI/ML capabilities in marketing not listed by 510(k) clearance summaries

Some developers of software-enabled medical devices are making statements about artificial intelligence or machine learning (AI/ML) capabilities in marketing materials that do not appear in the device’s 510(k) clearance summaries, according to a recent study published in JAMA Network Open.
 
Among medical devices whose software components resemble those on the US Food and Drug Administration’s (FDA) published list of AI/ML-enabled medical devices, about one fifth had “contentious” or “discrepant” statements in their marketing materials, Phoebe Clark, MS, of Biomedical Informatics at NYU Langone Health in New York, and colleagues said.
 
“The FDA’s function is to verify that there is a reasonable assurance that the devices are safe and effective for their intended uses, something consumers cannot verify on their own,” Clark and colleagues wrote. “Discrepancies between what consumers see in device marketing vs what the FDA considers safe are hard for consumers to reconcile. Clearance from the FDA implies endorsement of safety and effectiveness, and it is logical to assume potential consequences to this disconnect.”
 
The researchers performed a systematic review of 119 medical devices between November 2021 and March 2022, comparing the devices’ 510(k) clearance summaries against their marketing materials. They evaluated how closely each device’s marketing materials followed its 510(k) clearance summary, grouping devices into adherent, contentious, and discrepant categories.
 
Clark and colleagues defined an adherent device as one that either mentioned AI or ML in the clearance summary or made no claim of AI or ML capabilities, such that “the available marketing information for these devices echoed the sentiments of the FDA approval summary.” Contentious devices had no AI or ML capabilities identified by the FDA or in the 510(k) clearance summary, but their marketing materials referenced AI- or ML-adjacent capabilities, such as “smart and predictive analytics or modeling,” without directly stating that the device was AI/ML enabled. Marketing materials for a discrepant device stated that the device had AI or ML capabilities, but the device was not on the agency’s list of AI/ML-enabled devices or had no AI/ML capability mentioned in its clearance summary.
 
Overall, the researchers found 96 devices (80.67%) were adherent, 15 (12.61%) were discrepant, and 8 (6.72%) were contentious. There were significant differences between the adherent, discrepant, and contentious categories for radiological devices (P < .001) and cardiovascular devices (P < .001). Of the devices analyzed, 75 (63.03%) fell under the jurisdiction of the radiological advisory committee; of these, 62 (82.67%) were adherent, 10 (13.33%) were discrepant, and 3 (4.00%) were contentious. Of the 23 devices (19.33%) under the jurisdiction of the cardiovascular advisory committee, 19 (82.61%) were adherent, 2 (8.70%) were discrepant, and 2 (8.70%) were contentious.
 
Devices in other categories, such as those under the jurisdiction of anesthesiology, general and plastic surgery, pathology, dental and orthopedic advisory committees, had no devices that were considered adherent, the researchers said. “A 0% adherence rate is likely not indicative of the oversight of a committee but instead showing the emergence of this type of device in that field,” Clark and colleagues noted.
 
The researchers offered several potential explanations for why this issue might occur, such as the lack of a standardized format for 510(k) submissions among device developers, outdated guidance documents, and the accelerating development of AI and ML technologies.
 
“Further qualitative analysis and investigation into these devices and their certification methods may shed more light on the subject, but any level of discrepancy is important to note for consumer safety,” Clark and colleagues said. “The aim of this study was not to suggest developers were creating and marketing unsafe or untrustworthy devices but to show the need for study on the topic and more uniform guidelines around marketing of software-heavy devices.”
 
In an invited commentary, Nigam H. Shah, MBBS, PhD, of the Clinical Excellence Research Center and department of medicine at Stanford University in Palo Alto, Calif., said that while a majority of devices in the study were adherent, among those with discrepant claims in marketing materials, “the inconsistencies have disquieting implications.”
 
One potential issue is that developers may learn there is no penalty for making misleading or unsupported statements in marketing materials, as “it is difficult for the FDA to effectively police product marketing, even for products like drugs over which it unquestionably has regulatory jurisdiction.”
 
The potential for unsafe prescribing and inflated costs of care resulting from drug manufacturers marketing drugs for off-label indications “can be massively profitable and difficult for the government to stop,” they said, and it “would be unfortunate if AI- and ML-enabled devices went down the same path.”
 
Disclosure that a device uses AI or ML is an important issue for patient or clinician trust of a medical device as well as for organizations to understand how to safely deploy that device, Shah and colleagues explained.
 
“To fully fathom the implications of the discrepancies that Clark and colleagues identify, it is necessary to examine the decisions that would be made (and the action that would be taken or withheld) based on the AI- and ML-enabled devices’ output. In essence, we need to evolve our unit of examination from the model to the model plus the care workflow it drives,” they said. “The study by Clark and colleagues does not venture this deep into the field, and the FDA is constrained in its ability to fully assess the risks and benefits of AI- and ML-enabled devices in context. Without full and accurate disclosures of how devices work, device adopters and monitors, too, are hamstrung in critical ways.”
 
JAMA Netw Open Clark et al