FDA’s Tom Peter and Abbott’s Jad Wafeeq spoke at MedCon 2026 (credit: Jeff Craven)
COLUMBUS, OH – The difference between an anomaly and a defect in product software can be confusing, but identifying which is which is important for properly handling problems that crop up, two panelists recently told attendees at the 2026 MedCon conference, sponsored by the AFDO/RAPS Healthcare Products Collaborative.
Tom Peter, senior operations officer in the Office of Medical Device and Radiological Health Inspectorate (OMDRHI), Office of Inspections and Investigations (OII) at the US Food and Drug Administration (FDA), said that there is “some ambiguity” around what constitutes a defect and how that relates to an anomaly in product software.
Under international standards, an anomaly is a condition that differs from a product’s requirement specification, design documents, and standards as well as from what someone perceives or experiences from a software product. A defect can be considered an imperfection or deficiency within a product that does not meet a requirement or specification.
“The important thing to understand from this is the relationship between the two,” Peter explained. “Defects are a subset of anomalies—so all defects are anomalies, but not all anomalies are necessarily defects.”
Peter said that anomalies should be considered a “starting point” in an investigation to discover potential defects in software. He noted that companies sometimes use different terms, but problems can occur when those differences exist within an organization. If a company is not on the same page with its terminology, FDA staff at a facility may start asking more questions about the process for identifying and managing these issues to ensure it is well controlled, he added.
“The bottom line is, we don’t really care what you call it, it’s just that everyone in the organization has a common understanding,” he said.
Anomalies can come from many different sources, including internal sources such as reviews, testing, analysis, and code compilation, or through external sources like customer feedback, complaints, and during servicing. Regardless of the source, anomalies need to be captured by and documented in a company’s quality management system followed by an evaluation of the risk of the anomaly, Peter said.
Once an investigation into an anomaly is completed and a defect is uncovered, the company should conduct a risk-based evaluation of the defect. If no defect is found, the company should add the anomaly to a known anomalies list. Based on the results of the risk-based evaluation, a high-risk defect should be fixed, followed by verification and regression testing, while a fix for a defect deemed not high risk can be deferred to a later release.
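The decision flow described above can be sketched in code. This is a minimal illustration only; the class names, fields, and binary high-risk flag are assumptions for the sake of the example, not part of any standard or of the process as FDA described it:

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    KNOWN_ANOMALY = "add to known anomalies list"
    FIX_NOW = "fix, then verify and run regression tests"
    DEFER = "defer fix to a later release"

@dataclass
class Anomaly:
    description: str
    source: str              # e.g. "testing", "complaint", "servicing" (illustrative)
    is_defect: bool = False  # outcome of the investigation
    high_risk: bool = False  # outcome of the risk-based evaluation

def disposition(anomaly: Anomaly) -> Disposition:
    """Map an investigated anomaly to the handling path outlined in the article."""
    if not anomaly.is_defect:
        # No defect found: track it as a known anomaly
        return Disposition.KNOWN_ANOMALY
    if anomaly.high_risk:
        # High-risk defect: fix now, then verify and regression-test
        return Disposition.FIX_NOW
    # Lower-risk defect: the fix can be deferred to a later release
    return Disposition.DEFER
```

In practice the risk evaluation would be far more granular than a single boolean, but the sketch captures the branching Peter described.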
Peter noted that FDA expects companies to have procedures for reporting and reviewing anomalies regardless of their source, evaluating the safety risk, investigating the root cause of the anomaly, and documenting and tracking unresolved anomalies. He said that companies should have a “defect prevention mindset” for their product software.
“[W]e unfortunately see sometimes too much of an overreliance on testing and hoping that you identify all of your defects at that point, which, unfortunately, we know testing is not a perfect activity,” he said. “We need to think about how we’re going to prevent defects, not just detect them.”
Tips for handling defects
Jad Wafeeq, director of quality business support, software, at Abbott, echoed Peter’s point about being proactive in anticipating defects. He said it is important not to wait until testing to look for defects.
Defects can be inserted into software at any stage of the lifecycle, including during requirements analysis, software design, implementation, testing, configuration management, and deployment. Anticipating these threats at every stage of the medical software development lifecycle can help identify defects before testing.
“Use the defect threat model as a concept and apply it and see where these defects may get injected,” he said.
Establishing clear requirements for software can also reduce defects, Wafeeq explained. One test of a clear requirement is whether an engineer can describe why it is necessary in 10 seconds or less. The criteria for when a requirement should be considered done need to be defined within the requirement itself, he noted.
“What I’m seeing [is] that most of these defects are described in the requirement[s] themselves,” he said.
Teams should assess the actionable risk of defects early, Wafeeq said. “I think if you do the proper requirement and traceability … and you have the proper software tool, ideally you can get to evaluate these risks very quickly,” he said. “We deal with a lot of defects, and we cannot treat them all equally. The trick is how we can quickly triage them, assess their risk and make sure that they are prioritized correctly to be remediated.”
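Wafeeq’s point about quick, consistent triage can be illustrated with a simple ranking. The two-factor score below (severity times likelihood, on 1–5 scales) is a common risk-matrix convention used here as an assumption; it is not a description of Abbott’s actual tooling or scoring model:

```python
def triage(defects: list[dict]) -> list[dict]:
    """Rank defects so the highest-risk items are remediated first.

    Each defect dict is assumed to carry illustrative "severity" and
    "likelihood" ratings (1-5); the risk score is their product.
    """
    return sorted(
        defects,
        key=lambda d: d["severity"] * d["likelihood"],
        reverse=True,  # highest risk score first
    )
```

A consistent scoring rule like this is what lets a team avoid treating all defects equally while still triaging them quickly.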
It is also important not to lead with debate and blame, he noted. When his team comes to him with an issue, Wafeeq said his first step is to ask about the nature of the defect, rather than why it occurred.
“I don’t care about what led to it for now, until we triage it and make sure it’s risk assessed, and then we can look to the root cause,” he said.
Here, bringing in a patient advocate, such as a member of the product management team or clinical team, can be useful. “They will re-navigate the discussion from who broke it to patient safety,” Wafeeq said.
Defect reviews should also be routine, following repeatable, consistent, and predictable processes backed by checklists.
“It's a continuous cycle, so you have to do it repeatedly,” Wafeeq said. “I would ask the team to use a standardized process checklist for every defect evaluation and make sure that no stone is left unturned.”
There should also be a transparent, no-fault escalation process for defects where messengers are encouraged to report, rather than hide, problems in product software.
“Bad news doesn't get better with age,” Wafeeq said. “[T]he snowball effect is no different even when you’re in production, so even [if] you have a few deployments now with software that you just released, it’s better to know these issues as soon as possible.”
Defects also need to be a learning process for the team. “Usually, when we see issues happen in the system, sometimes we focus on that area of the software, but we don't do a full analysis,” he said. “It may be in other places in the software, or we get to repeat the mistake again when we have a new feature that we are developing.”
Using defects as fuel for continuous learning can help an organization enhance its software development standards, requirement clarity, testing strategy, and verification rigor, Wafeeq noted.
“[M]aintain a defect knowledge base, look to defect playbooks, look into your process, and try to learn from these defects and hopefully mature your team,” he said.