
Is FDA's Approach to Adaptive Clinical Trials Stunting Growth of Molecular Medicines?

Posted 20 May 2013 | By Alexander Gaffney, RAC

Is the right regulatory infrastructure in place in the US to accommodate a coming onslaught of pharmacogenomics data? No, it is not, and the agency needs to consider a massive realignment of its clinical trial protocols, argues a new report by a think tank focused on the US Food and Drug Administration (FDA).

The report, authored by Peter Huber, a senior fellow at Project FDA, and Andrew von Eschenbach, a former FDA commissioner, comes as increasing numbers of companies look to take advantage of molecular medicine, the ability to tailor medicines to molecular targets, to improve their chances of obtaining regulatory approval.

That flood of data is driven by the falling price of genome sequencing, which has spurred wider availability through direct-to-consumer services like 23andMe, which screens a person's genome for common risk factors. But aside from its use to consumers, who might sleep better at night knowing that they aren't at risk for breast cancer, the larger benefit might be for companies, which could comb massive databases to find common genetic risk factors and use those to develop new therapies and companion diagnostic tests.

Is FDA Ready?

For Huber, however, FDA doesn't seem ready to accommodate this new development paradigm.

"The FDA has spent the last 30 years pondering how, if at all, molecular science might be shoehorned into the clinical trial protocols that Washington first used over 70 years ago and formalized in licensing rules developed in the 1960s," he writes. "The regulatory system is now frozen in the headlights. Its drug-testing protocols cannot handle the torrents of complex data that propel the advance of modern molecular medicine."

The problem, Huber alleges, is that FDA's current clinical trials paradigm is set up to account for selection bias. But with the aid of molecular tools, this bias could hypothetically be taken entirely out of the equation by targeting a therapy to patients with an exact genetic match. For example, Vertex's Kalydeco, recently approved to treat cystic fibrosis, is only approved for use in a small subset of CF patients with a specific known genetic mutation.

Has FDA Already Met Criticisms?

For its part, FDA has been working to accommodate some of these advancements, releasing a new guidance in December 2012 regarding what it refers to as "enrichment strategies" for clinical trials. As explained by FDA, the guidance is intended to support the development of tools to increase homogeneity in trials, identify high-risk patients, support predictive enrichment strategies, better design clinical trials, and work around pre-existing regulatory issues.

"In almost all cases, the strategies affect patient selection before randomization (with a few exceptions for adaptive strategies to be noted later)," FDA wrote. "These strategies, therefore, generally do not compromise the statistical validity of the trials or the meaningfulness of the conclusions reached with respect to the population actually studied."

For example, FDA said it would allow sponsors to select patients using genomic, proteomic or other medical measurements. "Trials of prevention strategies (reducing the rate of death or other serious event) in cardiovascular (CV) disease are generally more successful if the patients enrolled have a high event rate, which will increase the power of a study to detect any given level of risk reduction," explained FDA. Therefore, if a patient has a history of serious cardiovascular problems, they might be considered a good fit for the study, while another patient without such a history would not. In the absence of a complete medical history, other factors, such as a high resting heart rate, might be used as a proxy.

These strategies will generally allow for a smaller sample size to be used in the trials, FDA said.
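FDA's point about event rates and power can be made concrete with the standard normal-approximation sample-size formula for comparing two event proportions. The sketch below is purely illustrative and not drawn from the guidance; the function name, the 20% relative risk reduction, and the baseline event rates are our own assumptions. For a fixed relative risk reduction, a higher baseline event rate shrinks the required trial dramatically.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_control, rel_risk_reduction, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a given relative
    risk reduction in event rate, using the normal approximation for
    comparing two proportions with a two-sided test."""
    p_treat = p_control * (1 - rel_risk_reduction)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    n = (z_alpha + z_power) ** 2 * variance / (p_control - p_treat) ** 2
    return math.ceil(n)

# Enriching for high-risk patients (10% event rate vs. 2%) shrinks the
# trial roughly fivefold for the same 20% relative risk reduction.
print(sample_size_per_arm(0.02, 0.20))  # low-risk population: tens of thousands per arm
print(sample_size_per_arm(0.10, 0.20))  # enriched population: a few thousand per arm
```

This is the mechanism behind FDA's statement: the treatment effect is easier to detect when more events occur, so enrolling high-event-rate patients buys power that would otherwise require many more enrollees.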

The Use of Validated Biomarkers

But that guidance relies heavily on one aspect: The biomarkers or tests used to screen patients should, "to the extent possible," be validated and the connection between results and effect understood, FDA said.

And to Huber, that approach is somewhat problematic, owing to something of a "chicken or the egg" problem.

"For a drug to perform well, we need to select the patients to fit it," Huber explains. "Ideally, the in/out selection criteria will span all the patient-side molecules that will affect a drug's performance, in all the different combinations that occur in different patients. But most of the time, we don't know what all or even most of those biomarkers are-and we won't find out until we test the drug or drug cocktail in enough patients to expose them."

"The FDA doesn't know, either-and it doesn't want biomarkers involved in the licensing process until it does," Huber charges. "That is the biggest obstacle that now stands between us and the future of molecular medicine."

Huber goes on to note FDA's frequent dissatisfaction with what he calls the "quality of biomarker science." While he concedes that the agency's underlying concerns are well-intentioned ("If we get the linkage wrong, the FDA may end up licensing drugs that are useless or worse."), Huber argues that its "reasonably likely" framework of biomarker validation is ripe for reform.

The "catch," Huber says, is that most biomarkers can't be developed without first enrolling patients in trials. "FDA protocols allow very little of it, if any, to be developed during the front-end licensing trials," he explains. "So most of this invaluable predictive molecular science is developed after a drug has been licensed and prescribed to many patients-many of whom, we discover, should never have used it."

So while adaptive trials depend on biomarker-driven approaches, FDA currently requires the submission of data built on non-adaptive clinical approaches, Huber argues. And that has ethical implications: delaying the release of drugs to needy patients, enrolling patients for whom a drug will have no benefit, or enrolling patients for whom it will cause harm.

Potential Solutions

So what is there to be done? Huber has a few ideas.

"The problem for the FDA is that robust drug-biomarker science can't be fully developed without testing drugs in a broad range of biochemically different patients and carefully studying and comparing their responses," he explains. "That means removing the FDA's cherished blindfolds and replacing simple trial protocols that analyze comparatively tiny amounts of data with complex protocols that analyze torrents of data-not the kind of change that ever happens quickly in Washington."

Huber looks to another government report, authored by the President's Council of Advisors on Science and Technology (PCAST), "Propelling Innovation in Drug Discovery, Development and Evaluation," as the framework of his proposal. That report included six components:

  1. The FDA should use its existing accelerated approval rule as the foundation for reforming the trial protocols used for all drugs that address an unmet medical need for a serious or life-threatening illness.
  2. The molecular science used to select targets and patients should be anchored in human rather than cell or animal data, and it can be developed, in part, during the clinical trials.
  3. The FDA should adopt "modern statistical designs" to handle the data-intensive trials and explore multiple causal factors simultaneously.
  4. The FDA should also "expand the scope of acceptable endpoints" used to grant accelerated approval.
  5. These initiatives should be complemented by greater rigor in enforcing and fulfilling requirements for follow-up confirmatory studies that demonstrate the actual efficacy of drugs on clinical outcomes, and the FDA should continue and possibly expand its use of reporting systems that track both efficacy and side effects in the marketplace.
  6. The FDA should also consider a process of incremental licensing that begins with accelerated approval for use of the drug only in treating "a specific subpopulation at high risk from the disease" when larger trials would take much longer or wouldn't be feasible.    

In addition, Huber argues that further elaboration of this "intermediate" benefit is necessary.

"Demanding a front-end demonstration that each piece will deliver clinical benefits on its own will only ensure that no treatment for the disease is ever developed," he quips. " An intermediate end point-'some degree of clinical benefit'-suggests that the drug is interacting in a promising way with a molecular factor that plays a role in propelling the disease; that is the best we can expect from any single piece. Even that requirement may be too demanding-used on their own, the individual constituents of some multidrug treatments that we need may never be able to deliver any clinical benefit at all."

Un-blind the Blinded

For Huber, the main issue to be solved here regards the blinding of studies. If doctors can access a drug approved via accelerated approval based on interim clinical benefits, those same doctors could "work out the rest of the biomarker science," spurring the development of additional pharmaceuticals. "The FDA's focus shifts from licensing drugs one by one, to regulating a process that develops the integrated drug/patient science to arrive at complex, often multidrug, prescription protocols that can beat biochemically complex diseases," Huber adds.

In other words, approve a drug based on interim data, allowing its true trial to begin in real-world conditions with real patients.

"If the analytical engine is doing its job well, the adaptive trial will progressively hone in on the taxonomic aspects of the disease-if any-that determine when a drug can perform well down at the molecular and cellular level, along with biomarkers that determine when the drug causes unacceptable side effects," Huber writes. "The drug's clinical performance should steadily improve as treating doctors gain access to the information that they need to predict when the drug will fit the patient. If performance does not improve, either the drug or the engine is failing; either way, the trial should stop. If performance does keep improving, the trial can start expanding again-more clinicians can enlist and treat more of the right patients."

Huber adds that FDA already has the ability to track post-marketing study commitments, which it regularly uses to monitor potentially risky drugs and pediatric trials, though he concedes that the real hurdle for implementing this type of system isn't efficacy so much as safety, requiring toxicity screening before adaptive trials can begin.

What remains to be seen now, however, is whether Huber's call for a new paradigm gets the ear of those at FDA, and whether earlier attempts to approve products based on surrogate endpoints or with minimal testing - a trial of approvals, if you will - will stymie regulators' enthusiasm for such an approach.

Read the full 30,000-word report here.
