
September 9, 2024
by Ferdous Al-Faruque

EMA, HMA publish large language model principles for regulators

The European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) have agreed to a set of guiding principles for their staff when using large language models (LLMs) to make regulatory decisions. Artificial intelligence (AI) virtual assistants and chatbots use LLMs to formulate results, and the agencies foresee regulators using such technology in their work.
 
"Whether they are used to query the extensive documentation regulators routinely receive, to automate knowledge/data mining processes, or as virtual AI assistants in everyday administrative tasks – LLMs have enormous transformative potential," said EMA. “However, LLMs also present challenges, e.g. variability in results, returning of irrelevant or inaccurate responses (so-called hallucinations), and potential data security risks.
 
"The purpose of the guiding principles is to build understanding of the capabilities and limitations of these applications among staff at regulatory agencies across the EU so that they can harness the potential of LLMs effectively and avoid pitfalls and risks," the agency added.
 
The guiding principles include general ethical considerations for using LLMs and draw from several sources, including the European Group on Ethics in Science and New Technologies (EGE), the High-Level Expert Group (HLEG), and academia. The ethical considerations cover principles such as human dignity, autonomy, rule of law, accountability, security, safety, transparency, non-discrimination, misinformation harms, and malicious uses.
 
"The protection of personal data also deserves particular attention," said EMA. "While this document focuses on the use of LLMs, the processing of personal data in LLMs can occur during their development, implementation, and use without necessarily being obvious."
 
The document delves into both user-level and organizational-level principles. On the user level, EMA states that staff should input data into LLMs responsibly and safely, continuously learn how to use LLMs effectively, and consult colleagues and report any issues that arise with LLM outputs. On the organizational level, the agency notes that organizations should define how LLMs can be used safely and responsibly, maximize the value of LLMs by providing proper training, and collaborate and share experiences across the network.
 
"Considering the fast-changing nature of AI, sharing experiences as a network is key as it reduces uncertainty, promotes a quicker common understanding and acts as a regulatory knowledge management tool that can help agencies shape their investment and experimentation," said EMA.
 
EMA also noted that the LLM guiding principles are part of the EMA and HMA's multi-annual AI workplan, which started in 2023 and is expected to end in 2028. The agency said the guiding principles will continue to evolve as needed, and it plans to hold a webinar on 13 September to share the document with the European medicines regulatory network.
 
Guiding principles