
June 18, 2024
by Joanne S. Eglovitch

DIA: Pharma expert offers advice on AI best practices for industry

San Diego – Best practices for artificial intelligence in the pharmaceutical industry don’t need to be built from scratch and can instead be based on guidelines developed by government entities and standards organizations, said Rose Purcell, director of global regulatory policy and innovation at Takeda, at the DIA 2024 Global Annual Meeting.
 
The session also offered comparative perspectives on regulating AI in the US and the EU, as well as some common themes the public has expressed to regulators as they mull future frameworks for regulating the burgeoning technology.
 
Purcell noted that the regulatory landscape for AI is “complicated and evolving” and that any regulatory framework must consider patient privacy and data protection, including the EU’s General Data Protection Regulation (GDPR) and relevant US state laws, such as those in California and Washington.
 
Best practices
 
Purcell also provided some insight on what these best practices should look like. For one, the “ideal” AI program should have a data governance group consisting of a cross-disciplinary team that includes lawyers, regulatory experts, and others “who understand AI regulatory governance,” not only technical staff.
 
It should also include responsible AI accountability policies, a risk management framework, a central repository of AI/ML applications, and a repository of algorithms to ensure that “we all know how to use this technology.”
 
There should also be a common set of AI terms and taxonomy. She noted that common terms, such as “validation,” can be used differently across various disciplines.
 
 
Purcell said that these best practices do not have to be written “from scratch” but can incorporate existing approaches already used by other government agencies and standard-setting bodies.
 
This can include the National Institute of Standards and Technology’s (NIST) recently developed risk management framework for AI, as well as its companion framework for generative AI. In addition, firms can reference the Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERIA), which was developed by The Alan Turing Institute for the Council of Europe.
 
Standard-setting bodies have also published relevant standards, including ISO/IEC 42001:2023 on AI management systems and the ISO risk management standard (ISO 31000:2018). Others include ISO/IEC Guide 51:2014 on safety aspects and the IEEE Standard Model Process for Addressing Ethical Concerns During System Design (IEEE 7000-2021). Firms can also leverage frameworks from other countries, such as Singapore’s AI governance testing framework and toolkit.
 
All these frameworks include “core AI ethical principles” such as fairness, transparency, human accountability, and oversight.
 
Common themes
 
In terms of the regulatory space, Purcell noted some common themes echoed in public comments to both the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) on their recent discussion and reflection papers exploring a regulatory framework for AI.
 
The EMA issued its reflection paper on the use of AI across the pharmaceutical lifecycle in July 2023, and FDA released its discussion paper on using AI and machine learning in drug and biological product development in May 2023. (RELATED: FDA publishes discussion paper on AI in drug development, Regulatory Focus 12 May 2023)
 
Comments on these documents supported a flexible and risk-based regulatory framework and called for more clarity on a risk-based approach to regulating AI.
 
There were also requests for greater alignment with international, regional, and national legislative and regulatory frameworks.
 
Yet there were differences in the comments as well. Respondents to the EMA’s reflection paper called for such a framework to adapt to existing rules ensuring a high level of health protection and sought clarity on how the framework would interact with related regulations. Respondents also wanted clear definitions of AI-specific terminology and strong intellectual property protections for innovators.
 
The comments on the FDA’s discussion paper sought clarity on the scope of FDA’s regulatory authority and called for more frequent informal communications with the agency on this topic.
 
Besides these efforts, other entities are also developing AI frameworks, including WHO’s Global Initiative on AI for Health. In addition, Purcell said the International Coalition of Medicines Regulatory Authorities (ICMRA) has dedicated “several workstreams” to AI.
 