Regulatory intelligence is at the heart of every well-informed regulatory decision, and is integral to maximizing effectiveness and influence for the regulatory professional. The Regulatory Intelligence Quotient is a regular exploration of regulatory intelligence topics by thought leaders in the field. Want to learn more or suggest future topics? Contact us at email@example.com
In the last Regulatory IQ article, we looked at average review times for 510(k)s submitted in different months of the year. This prompts the question: What other factors contribute to review time variability? The US Food and Drug Administration's (FDA) Center for Devices and Radiological Health (CDRH) has a goal of rendering a decision of substantially equivalent (SE) or not substantially equivalent (NSE) within 90 days. Some submissions take considerably longer, while others seemingly breeze through in half that time.
To illustrate this variability, we used the SOFIE System by Graematter to look at review times for one of the busiest review committees, radiology, and the product code at the top of the list in terms of number of submissions filed, LLZ (radiological image processing system). If we compare review times for 510(k)s for three companies (we’ll label these Company A, B, and C) during the years 2004-2013, with the overall average review times for all companies, here’s what we see:
So while average review times were trending upward in general during this period (shaded blue), we see considerable differences in results for the three companies. Company A's average review times were better than or equal to the average for the LLZ product code every year in our timeframe – a pretty good record. Company C comes in a close second, equaling or beating the overall average in 9 of the 10 years. Company B didn't fare nearly as well, meeting or beating the overall average in just 5 of the 10 years. Note that in 2010, companies B and C each had only one clearance, and both saw a big jump in review times.
By plotting the variance from the overall average, we can better see the disparities between the companies’ performance.
We see that companies A and C typically met or beat the average, whereas average review times for Company B were often higher than the product code average, and were more variable year to year.
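The "variance from the overall average" view described above is simple to reproduce. A minimal sketch in Python, using illustrative placeholder review times rather than actual SOFIE data (the years and values below are hypothetical):

```python
# Hypothetical yearly average 510(k) review times in days.
# These numbers are placeholders for illustration, not real SOFIE results.
overall_avg = {2010: 110.0, 2011: 115.0, 2012: 118.0, 2013: 120.0}
company_avg = {2010: 160.0, 2011: 112.0, 2012: 130.0, 2013: 118.0}

def variance_from_average(company, overall):
    """Per-year difference between a company's average review time and the
    overall product-code average. Negative values mean faster than average."""
    return {year: company[year] - overall[year]
            for year in company if year in overall}

def years_meeting_average(company, overall):
    """Count the years in which the company met or beat the overall average."""
    return sum(1 for diff in variance_from_average(company, overall).values()
               if diff <= 0)

deltas = variance_from_average(company_avg, overall_avg)
print(deltas)                                        # per-year deltas in days
print(years_meeting_average(company_avg, overall_avg))
```

Plotting those per-year deltas (rather than the raw review times) is what makes the year-to-year consistency of each company stand out.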
For another example we’ll look at lasers (Product Code GEX), reviewed by the General and Plastic Surgery Committee.
Just as we saw with LLZ, it’s apparent that overall review times for GEX are trending slightly higher. All three of these companies generally do better than the average for the product code, but Company E has more consistency in its performance, as illustrated again in the chart below:
So how does a regulatory professional work toward achieving results like the ones we see for companies A and E? Although we can't know the details surrounding these companies' individual submissions, careful document preparation and clear, timely communication with FDA are key to successful submissions. A June 2014 report that assessed (among other things) the 510(k) submission review process cited both submission quality and sponsor/reviewer communications as critical factors affecting review times. Specific issues include:
- discrepancies or missing data
- inconsistent, unstructured application format
- clinical deficiencies
- wrong submission type used
- inappropriate predicate device
- incomplete or late response to FDA
While every company has deadlines for getting submissions into FDA, investing the time required to create quality submissions can pay big dividends in getting a product to market faster.
RAPS 2014: Join The Regulatory Convergence for a pre-conference workshop on True Regulatory Intelligence for Medical Devices. For details, visit http://connect.raps.org/2014raps/2014raps-home/workshops-2014raps
Let us know what you think. To contact us with your thoughts or to request more information, email firstname.lastname@example.org or connect with us on LinkedIn, Twitter and Facebook.