
Regulators release 10 principles for good machine learning practice

Posted 27 October 2021 | By Mary Ellen Schneider 


Regulators from the US, Canada, and the United Kingdom unveiled 10 principles to guide the development of good machine learning practice for medical devices.
 
The principles are meant to drive the adoption of good practices that have been proven in other sectors, to help tailor those practices so that they are applicable to medical technology, and to create new practices specific to the health care sector. The document, which was issued by the US Food and Drug Administration (FDA), Health Canada, and the UK’s Medicines & Healthcare products Regulatory Agency (MHRA), is aimed at informing the work of the International Medical Device Regulators Forum (IMDRF) and other international standards organizations as they tackle regulation of a growing number of medical devices that incorporate machine learning and artificial intelligence.
 
“Artificial intelligence and machine learning technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day,” agency officials wrote. “They use software algorithms to learn from real-world use and in some situations may use this information to improve the product’s performance. But they also present unique considerations due to their complexity and the iterative and data-driven nature of their development.”
 
The guiding principles set out by the three agencies are:
  1. Multi-disciplinary expertise is leveraged throughout the total product life cycle, with understanding of how the model is meant to be integrated into the clinical workflow.
  2. Good software engineering and security practices are implemented, including data quality assurance, data management and cybersecurity practices.
  3. Clinical study participants and data sets are representative of the intended patient population so that results can be generalized to the population of interest.
  4. Training data sets are independent of test sets.
  5. Selected reference datasets are based upon best available methods.
  6. Model design is tailored to the available data and reflects the intended use of the device. Model design should support the mitigation of known risks such as overfitting, performance degradation, and security risks.
  7. Focus is placed on the performance of the human-artificial intelligence team, rather than the artificial intelligence model alone.
  8. Testing demonstrates device performance during clinically relevant conditions. Considerations include the intended patient population, key subgroups, the clinical environment, measurement inputs, and potential confounding factors.
  9. Users are provided clear, essential information, such as the product’s intended use and indications, the data used to test and train the model, known limitations, and clinical workflow integration.
  10. Deployed models are monitored for performance and re-training risks are managed.
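Principle 4, keeping training data independent of test data, is one place where a concrete illustration may help. The sketch below is a hypothetical example (the patient IDs and record structure are invented, not from the guidance): for clinical data, splitting by patient rather than by individual record prevents the same patient's data from leaking into both sets and inflating measured performance.

```python
import random

# Hypothetical example: records keyed by patient ID. To keep the test set
# independent of the training set (principle 4), split by patient so that
# no single patient contributes data to both sets.
records = [{"patient": f"P{i % 20:02d}", "value": i} for i in range(100)]

patients = sorted({r["patient"] for r in records})
random.seed(0)
random.shuffle(patients)

# Hold out roughly 20% of patients (not 20% of records) for testing.
cutoff = int(0.8 * len(patients))
train_patients = set(patients[:cutoff])

train = [r for r in records if r["patient"] in train_patients]
test = [r for r in records if r["patient"] not in train_patients]

# No patient appears in both sets, so test performance is not inflated
# by per-patient leakage.
assert not ({r["patient"] for r in train} & {r["patient"] for r in test})
```

A naive record-level split would scatter each patient's records across both sets; splitting on patient ID is a common way to honor the independence that the principle calls for.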
 
This is not FDA’s first effort to influence the landscape around the machine learning and artificial intelligence used in medical devices. In 2019, FDA’s Center for Devices and Radiological Health (CDRH) issued a proposed framework for modifications to artificial intelligence/machine learning-based software as a medical device (SaMD). The document proposed requiring a premarket submission for artificial intelligence/machine learning-based SaMD when a software change or modification significantly affects the device performance or safety and effectiveness (RELATED: FDA Proposes Regulatory Framework for AI- and Machine Learning-Driven SaMD, Regulatory Focus 02 April 2019).
 
In 2020, radiologists pushed back against the use of artificial intelligence and machine learning in medical imaging applications, voicing concerns about “black box” learning environments that can present validation challenges. (RELATED: Radiologists to FDA: Autonomous AI not ready for prime time, Regulatory Focus 02 July 2020)
 
A horizon-scanning report issued by the International Coalition of Medicines Regulatory Authorities (ICMRA) in August called for the engagement of ethics experts in designing a regulatory framework for artificial intelligence and machine learning. The report’s authors identified many of the problems addressed in the new guiding principles set forth by the three regulators. (RELATED: ICMRA: Address artificial intelligence challenges with permanent working group, Regulatory Focus 16 August 2021)
 
The public can comment on the 10 guiding principles through the public docket at Regulations.gov.
 
Good Machine Learning Practice for Medical Device Development: Guiding Principles

 

© 2021 Regulatory Affairs Professionals Society.
