15th May 2026
by Ferdous Al-Faruque

Euro Convergence: Experts share insights on AI for submissions, conformity assessments

Left to right: Sebastian Fischer, Erik Vollebregt, and Tom Patten. (credit: Ferdous Al-Faruque)

LISBON – A panel of experts at RAPS Euro Convergence 2026 discussed how artificial intelligence (AI) is being applied throughout the product lifecycle, from compiling premarket application submissions to assisting in the conduct of conformity assessments.

While AI can streamline the regulatory process, the experts noted that the responsibility for ensuring the accuracy of its outputs lies with the manufacturer or authority using the technology.

Panel moderator Erik Vollebregt, a partner at Axon Lawyers, said that the use of AI in conformity assessments is coming at a critical time, as industry and notified bodies are trying to cut costs. He noted that this is happening as notified bodies have faced a capacity bottleneck, especially regarding the transition backlogs for the Medical Device Regulation (MDR) and In Vitro Diagnostics Regulation (IVDR), time-bound assessments, and the growing complexity of the technical files they must evaluate.

"People are asked to do more and more with less and less and less time," said Vollebregt. "Manufacturers are leveling up, so that means that everybody else needs to level up as well.

"I know from personal experience that it is indeed not only notified bodies that are looking at this... even competent authorities are also considering leveling up and seeing how they can do this," he added. "I know at least three that are currently working on projects to operationalize AI for market surveillance, for example."

Vollebregt also noted that manufacturers and competent authorities are using AI in their workflows, and that the quality of AI foundation models has advanced to the point that they can now read, cross-reference, and summarize technical documents in ways useful to qualified reviewers. He said the question was not whether AI should be used for regulatory purposes, but whether such uses meet the safeguard requirements that notified bodies have to operate under.

"What I've seen personally with some notified bodies is you can actually see that auditors are adopting AI because suddenly questions become more polished and also more prolific," he added. "For me, that's a clear sign that somebody is using AI."

Vollebregt listed several advantages AI can offer to notified bodies, including improved efficiency, accuracy, the ability to process large volumes of documents, consistency, and traceability. He also said the technology is being used in conformity assessment workflows for functions such as dossier completeness checks, ensuring consistency of concepts, managing checklists, drafting documents, and addressing submissions in multiple languages.

"I've used AI myself recently in arguing with a competent authority about the interpretation of a specific linguistic unit in a provision of the MDR, and there AI can be super useful, because it can analyze the concept in all 24 official languages and then say whether a particular interpretation is supported by the majority or minority of languages," said Vollebregt.

Vollebregt also discussed the guardrails in place for how notified bodies can use AI: all findings and decisions must be approved by qualified notified body staff under human oversight; the work cannot be subcontracted; and documentation is needed to ensure traceability and versioning. He added that there are expectations for validation, competency, confidentiality, and decommissioning of confidential data to ensure it is permanently deleted.

"In the end, especially for notified body operations, we are still working on a legislative paradigm that is based on qualification of people and not qualification of AI models," said Vollebregt. "That means that the human in the loop is essential.

"The human should not only be in the loop [but] the human should also be deciding," he added. "With notified bodies, for certification decisions, there is also a layered human decision system."

Vollebregt said manufacturers and even notified bodies could ultimately use AI to manage their technical documentation and quality management system structures. He envisions a future in which a blockchain structure could be used to manage integrity, and manufacturers and notified bodies could have their own AIs with certain guardrails that can process administrative matters and escalate them to human decision-makers when needed, while otherwise being able to sign off on simpler issues.

Sebastian Fischer, a regulatory strategy principal at TÜV SÜD, said his notified body uses an internal AI system called IDAS (Intelligent Document Analysis Solution) that has been trained on data provided by volunteer manufacturers. He said the real value of the AI lies in performing simple, basic tasks such as conducting completeness checks, comparing text blocks for consistency, recognizing references to ensure a document cites the right standards and guidance, creating tracked changes, and performing search functions. He echoed Vollebregt's sentiments and said they are considering machine-to-machine communication that would allow their AI to interact with a manufacturer's AI.

Tom Patten, international IVDR manager at GMED, said his notified body uses an in-house application portal called GMED Connect, intended to be used from the application process through certification, and that they are considering applying AI to it. He said everything is done online on the portal: the user registers their application, trial site, and devices, then receives a quote.

Patten said that GMED made a deliberate decision not to rush into AI, instead developing its digitalization platform first and then deciding which parts of it should be integrated with AI. Unlike TÜV SÜD's approach to training IDAS, he said GMED is in conversations about how it could access and review manufacturer data without moving it out of the manufacturer's own systems. He said they are also very concerned about cybersecurity and have been discussing how best to protect the data if they move it into their own system and use AI on it responsibly.

"That's kind of all underway," said Patten. "It's at various stages, but that's the approach we're taking. It is going to take us a little bit longer to get there."

However, Patten noted that GMED is doing similar work to TÜV SÜD with its system for simpler tasks, such as pre-conformity assessment checks, and integrating it into the conformity assessment process.

"Ultimately, what we're looking for is something that's flexible, but the user experience from the notified body needs to be consistent," said Patten. "We need flexibility to allow every type of manufacturer to work with their data.

"But then we want the user experience from our side to be as consistent as possible," he added.

During the Q&A portion of the discussion, an attendee presented a scenario in which a manufacturer uses AI to generate conformity assessment documents that include an error missed by the notified body, the product receives a CE mark and goes on the market, and something bad happens. They asked who is legally responsible for the AI's error.

Vollebregt said that ultimately it was a human who decided what went into the submission, and so they bear the responsibility.

"From a liability perspective, this is basically as it always was; people are very quick to blame AI," he added. "This is also something we see throughout history with basically every new technology."

Fischer agreed with the sentiment, saying that AI is no excuse to become lazy, and that processes should include adequate checks and quality assurance to catch any errors made by the AI.

"Decisions always have to be made by humans and be based on actual text we receive, on the actual documentation, and not on some summary output coming from an AI model," he added.

Patten chimed in, saying people tend to be creatures of convenience, looking for the easiest path, and that AI offers a lot of convenience; unfortunately, there is a conflict between convenience and doing things appropriately and responsibly. He noted that under MDR/IVDR Article 10, which sets out the general obligations of manufacturers, it is ultimately the manufacturer's responsibility to confirm the accuracy of its documentation, and that failing to detect an AI error is itself an error to be accounted for.