Posted 10 March 2016 | By Michael Mezher
At a public workshop in Washington, DC last week, top US Food and Drug Administration (FDA) officials and other experts explored the challenges and opportunities surrounding real world evidence (RWE) in regulatory decision making.
The question they sought to answer is whether data gathered from healthcare systems can be used to supplement or support regulatory decisions, such as the approval of new indications or label expansions for existing drugs.
FDA currently uses real world evidence most extensively for postmarket safety surveillance. However, Mark McClellan, director of the Duke-Margolis Center for Health Policy, says the next step is to use real world evidence to "improve our understanding of what works and what doesn't work in healthcare delivery and medical technology."
Aside from postmarket surveillance, FDA sometimes uses real world evidence from natural history studies and retrospective observational studies to support drug approvals for rare or life-threatening diseases.
McClellan says he sees real world evidence as a way to "bridge the knowledge gap between clinical research and clinical practice," but stresses that "more efforts are needed to explore how real world evidence could be incorporated into the regulatory framework."
"The application of existing and emerging approaches to real world evidence … may offer new pathways to address regulatory requirements," he said.
Typically, the data to support FDA approval for new products comes from highly controlled clinical trials that are cordoned off from the greater healthcare system.
However, healthcare systems generate massive amounts of data—much of which is now electronic—including insurance claims data, patient or product registries and electronic health records (EHRs), which are already being mined for data to make informed clinical decisions.
Several efforts, including the National Institutes of Health (NIH) Collaboratory, PCORnet and FDA's Sentinel initiative, are already working to use these data to improve clinical trial efficiency and drug safety monitoring.
Gregory Daniel, deputy director of the Duke-Margolis Center, said that real world data is already being leveraged to make clinical trials more efficient by making patient identification and recruitment faster.
Daniel also touched upon some of the challenges to using these data, saying the validity and reliability of real world evidence is often hard to ascertain, and that its sources are poorly linked and in many cases incompatible.
Richard Platt, chair of the Harvard Medical School Department of Population Medicine and principal investigator of FDA's Sentinel System, echoed this, saying, "We live in an environment where customization of EHRs to where they're incompatible is a feature and not a bug."
Janet Woodcock, director of the Center for Drug Evaluation and Research (CDER), got a laugh from the audience, joking that "data gathered from healthcare has always had one characteristic, it's not very good," but emphasized that her agency is very familiar with it, "warts and all."
The question on everyone's mind, Woodcock says, is "can we randomize people within the healthcare system to do a trial inside the healthcare system utilizing the data collection methods of the healthcare system?"
Woodcock added that it's easy to find "really terrible side effects … because they're very dramatic." But in terms of "huge treatment effects," she added, "that really isn't the problem with most drugs … most of them only have an incremental effect."
"That's where we need randomization, because you don't just have something that hits you in the face," Woodcock said.
If real world evidence can be validated and studied in a randomized way, Woodcock said she believes it could be used to support new indications, or expanded labeling, for existing therapies.
This is already happening, Woodcock said, but right now it's the exception rather than the rule. "Have we ever done this for an indication? Yes, but it's very rare, even for rare diseases."
Jonathan Jarow, chair of the medical policy council at CDER, called this the "holy grail," describing a situation where someone could "take an existing database, punch a button, and compare one drug to another therapy or device … show evidence of efficacy and get labeling of a marketable product in the US."
Despite the attractiveness of running a few queries in a database and getting data that could lead to a labeling change or new indication, many challenges still need to be addressed.
One area where Woodcock says she sees promise would be to use data on off-label drug use to inform labeling changes after a product is on the market.
"In the future, maybe the second and third indication … you could do a randomized controlled trial, but do it utilizing the tools of the healthcare system. What has to happen in order for that to be acceptable evidence?" she asked.
This question brought Woodcock back to the issue of reliability. With breast cancer in particular, she said, patient medical records often reflect the wrong stage of disease. Data from insurance claims, Woodcock said, also suffers from reliability issues, "because they're used for another purpose."
Robert Temple, deputy director for clinical science at CDER, said he sees promise for trials conducted using real world evidence to answer questions like how long patients should take a drug.
For example, Temple said a large study comparing multiple doses of a drug over time could be conducted within the healthcare system using EHRs to track patient outcomes over time.
"I think you could conceivably do such a study in a randomized environment," he said.
However, Temple said such trials would still require informed consent from patients, as well as investigators to conduct the study.
"US law requires that they get consent if it's a trial being done for regulatory purposes," Jarow said, "So even though the Common Rule would exclude that as a part of standard care, FDA's current regulations do not."
Another challenge, according to Temple, is that healthcare records in the US might not be detailed enough to determine certain outcomes or endpoints, or even deaths. For example, Temple said, "they probably collect whether a person had a heart attack, but we don't know whether they make the diagnosis correctly."
"To the extent that the data that are collected are not precise and not good, they make it harder to show an effect … it would be very hard to imagine a persuasive non-inferiority study in this setting," Temple said.
Moments later, Temple pivoted, adding, "I'm not even sure you couldn't do a non-inferiority study" when an effect size is very large.
Clifford Goodman, director of the Center for Comparative Effectiveness Research at The Lewin Group, gave an example of how the disconnect between clinical trial data and real world evidence can impact a drug's uptake.
"Hans-Georg Eichler, [senior medical officer at the European Medicines Agency (EMA)], realized that the EMA was [approving] all these swell new drugs and biologics … and then after a while, he starts looking around—there was a wait-a-minute moment—when he said, 'We're approving all this stuff, but it's not being taken up in the market.'"
The issue, he realized, was that health technology assessment (HTA) bodies wanted different data than EMA was using to approve the drugs, such as data on rare adverse events and long-term efficacy.
"There's no longer this border—this boundary—between premarket data collection and postmarket data collection … he started learning about efficacy, effectiveness and then relative efficacy versus relative effectiveness," Goodman said.
"An important thing that RWE does … is that it re-balances the relative importance of evidence sources in support of decision making," Goodman said.
"If you talk now to someone, let's say a chief medical officer at a large PBM [pharmacy benefit manager] … they're saying 'Well, we used to depend on the regulatory submission data … but now, I got my own data, and I'm not talking about hundreds of patients, or thousands of patients, I've got data on millions of patients.'"
The sources for real world data are myriad, he added, "Yes, pragmatic or practical clinical trials are a source, payment claims, pharmacy prescriptions and bills, registries, EHRs, EMRs, laboratory test results, radiographic images, biobanks (specimens, tissues and so forth), molecular genomic data, vital statistics data and quite important lately … patient generated data … it's not just that though, it's credit card purchases, it's how far you live from the Whole Foods … who lives with you, do you own a pet?"
Combined with new analytic methods, such as data mining and machine learning, Goodman believes researchers can identify patient clusters that would otherwise go unnoticed and that may respond differently to a treatment than other patients with the same condition.
Tags: Real world evidence, RWE, Real world data, Janet Woodcock, Robert Temple, Sentinel