This article discusses challenges facing researchers who pursue biomarker research and collect specialty laboratory data, which are increasingly used in FDA drug submissions and must now be delivered in datasets that meet new regulatory standards. The authors review new technologies for biomarker data management that, combined with subject matter expertise, aim to improve flexibility, efficiency and compliance.
New US Food and Drug Administration (FDA) data exchange standards now require delivery of Clinical Data Interchange Standards Consortium (CDISC)-compliant data sets.1 This regulatory change presents a unique challenge for drug developers who pursue innovative clinical research incorporating biomarkers and specialty lab data. While specialty lab data are increasingly being utilized in FDA drug submissions, new approaches are needed for managing, processing and organizing biomarker data in an efficient, CDISC-compliant way. This can be successfully achieved with the use of new technologies specifically engineered for complex specialty labs combined with biomarker data experts and processes implemented in parallel to traditional clinical data management.
Biomarkers and specialty labs are core to modern clinical trials. The broad and ever-growing set of assays can range from targeted panels to high-content or high-throughput experiments. Techniques are seemingly boundless, especially in complex biological domains like oncology, immunology and genetics—with complex data being generated from flow cytometry, next generation sequencing, immunosequencing, mutational analysis, gene or protein expression, immunohistochemistry, circulating tumor cells, cytogenetics, and other technologies.
When assessing drug efficacy and/or safety, evaluating biomarkers in clinical trials and integrating specialty lab data with pharmacokinetic data, safety labs and clinical data provides regulators with a more complete picture of the drug's profile. This activity presents a unique challenge for drug developers as they simultaneously endeavor to conduct innovative clinical research and comply with regulatory requirements for submission. Prior to 2017, there was still some flexibility in how data could be submitted to FDA. This changed in December 2016, when FDA's binding guidance document on study data exchange standards came into full effect.2
Studies now must use the appropriate FDA-supported standards, formats and terminologies specified in the FDA Data Standards Catalog for New Drug Application (NDA), Abbreviated NDA and certain Biologics License Application (BLA) submissions.3 The current catalog specifies use of the Clinical Data Interchange Standards Consortium (CDISC) Study Data Tabulation Model (SDTM), Standard for Exchange of Nonclinical Data (SEND), Analysis Data Model (ADaM), and Define-XML, as well as CDISC Controlled Terminology.
Incorporating Specialty Lab Data into Submissions: Challenges and the Need for Technology
Delivery of CDISC-compliant data sets for downstream use is a time-sensitive component and rate-limiting step of post-database-lock activities. Adding complex, often "messy" biomarker data to the traditionally rigid, process-driven SDTM workflow without considering new ways of handling these data is a recipe for failure.
When it comes to delivering specialty lab data, the process is further complicated by data exchange standards that, in many cases, are still developing. Moreover, delivering specialty lab data through SDTM programming typically requires expert input to determine how complex lab data can be mapped appropriately. Even when implementation guides exist, transforming raw data files into CDISC-compliant data sets requires an in-depth understanding of the complexity, quality control and processing steps appropriate for each assay.
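To make the mapping step concrete, the sketch below shows what transforming one raw specialty lab record into SDTM-style variables can look like. This is an illustrative example only, not a validated submission tool: the raw column names, study identifier and mapping rules are assumptions, and real mappings would apply CDISC Controlled Terminology and assay-specific quality control.

```python
# Illustrative sketch (not a validated SDTM tool): mapping a raw
# flow cytometry export to a few SDTM LB domain variables.
# The raw column names and "STUDY01" identifier are assumptions.

RAW_ROWS = [
    {"subject": "101-001", "visit": "C1D1", "marker": "CD4+ T cells",
     "value": "612", "units": "cells/uL", "collected": "2017-06-01"},
]

def map_to_lb(row, seq):
    """Map one raw specialty-lab row to SDTM LB-style variables."""
    return {
        "STUDYID": "STUDY01",
        "DOMAIN": "LB",
        "USUBJID": f"STUDY01-{row['subject']}",
        "LBSEQ": seq,                  # sequence number within subject
        "LBTEST": row["marker"],       # would need CDISC Controlled Terminology
        "LBORRES": row["value"],       # result as originally received
        "LBORRESU": row["units"],
        "LBDTC": row["collected"],     # ISO 8601 date
        "VISIT": row["visit"],
    }

lb_records = [map_to_lb(r, i + 1) for i, r in enumerate(RAW_ROWS)]
```

Even in this toy form, the assay-specific knowledge lives in the mapping function, which is exactly where subject matter expertise is needed for each new specialty lab source.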
The next evolution of managing specialty lab data requires not only the aforementioned subject matter expertise, but also technologies to enable faster turnarounds and more robust pipelines. For context, Electronic Data Capture (EDC) technology and lab management tools are regularly used in the more established workflow for collection of clinical data and standard local lab information to accelerate data processing. Technology engineered for specialty lab data could have a similar impact on industry's ability to deliver these data, under the same regulations, and in an efficient, high-quality manner.
Technology-Enabled Approach to Managing Specialty Lab and Biomarker Data for FDA Submission
Success in data management depends on the right combination of technology, people and process. Over the past decade, clinical data management technology has advanced considerably. However, specialty lab and biomarker data continue to be managed external to EDC systems, and often separate from traditional clinical data management workflows altogether, even as these data are being incorporated into submissions to provide insights on pharmacological effects and to aid in interpreting the safety and effectiveness of a compound. Because biomarker data are therefore subject to data exchange standards for FDA submission, new approaches need to be developed for managing, processing and organizing biomarker data in an efficient, CDISC-compliant way.
Just as technology revolutionized industry's approach to clinical data management, new technology specifically engineered for complex specialty labs, combined with biomarker subject matter experts and associated biomarker data management processes, can form the foundation of a rigorous, agile biomarker data management approach for clinical trials (Figure 1).
Figure 1. EDC-Enabled Clinical Data Management and Programming Workflow, From Point-of-Collection to SDTM Data
Figure 1 shows the EDC-enabled clinical data management and programming workflow, from point of collection to SDTM data, in parallel with a new model for biomarker data management that aligns with and enhances the traditional workflow.
This approach illustrates how disparate sources of biomarker data can be harmonized and stored for more effective on-study and downstream use. From here, experts with knowledge of specialty labs and CDISC standards can map biomarker data to the appropriate SDTM domain.
By harmonizing and storing the data in a centralized database, downstream efficiencies can be realized through more effective variable mapping and the development of reusable macros and code. Once the pipeline is established, execution can be technology-driven and access to data as simple as pushing a button, e.g., "Download CDISC data sets." Ultimately, this approach to biomarker data management, leveraging technology and subject matter expertise, provides flexibility, efficiency and compliance. Immediate gains can be realized by implementing such an approach to augment traditional clinical data management.
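One way such a reusable, technology-driven pipeline can be structured is a registry of per-assay mapping functions, so that a single export step (the "Download CDISC data sets" button) applies the right mapping to each harmonized record. This is a minimal sketch under assumed names; the assay labels, function names and output variables are illustrative, not the authors' actual implementation.

```python
# Hypothetical sketch of a reusable mapping pipeline: each assay type
# registers its own mapping function once, and a single export step
# applies them uniformly. All names here are illustrative assumptions.

MAPPERS = {}

def register_mapper(assay):
    """Decorator that registers a mapping function for an assay type."""
    def decorator(fn):
        MAPPERS[assay] = fn
        return fn
    return decorator

@register_mapper("flow_cytometry")
def map_flow(row):
    return {"DOMAIN": "LB", "LBTEST": row["marker"], "LBORRES": row["value"]}

@register_mapper("ihc")
def map_ihc(row):
    return {"DOMAIN": "LB", "LBTEST": row["stain"], "LBORRES": row["score"]}

def export_cdisc(harmonized):
    """'One-button' export: apply the registered mapper to each record."""
    return [MAPPERS[rec["assay"]](rec["data"]) for rec in harmonized]

datasets = export_cdisc([
    {"assay": "flow_cytometry",
     "data": {"marker": "CD8+ T cells", "value": "305"}},
    {"assay": "ihc",
     "data": {"stain": "PD-L1", "score": "2+"}},
])
```

The design point is that assay-specific expertise is captured once, in the registered mapper, after which new data from the same source flow through the pipeline without renewed manual programming.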
- Providing Regulatory Submissions in Electronic Format – Standardized Study Data: Guidance for Industry. December 2014. FDA website. https://www.fda.gov/forindustry/datastandards/studydatastandards/default.htm. Accessed 18 January 2018.
- FDA Data Standards Catalog v. 4.10. October 2017. FDA website. https://www.fda.gov/forindustry/datastandards/studydatastandards/default.htm. Accessed 18 January 2018.
About the Authors
Jared Kohler, PhD, is senior vice president, translational informatics and biometrics at Precision for Medicine. He is a specialist in precision medicine–focused drug and diagnostic development, has pioneered approaches in biomarker data management and has led biometrics operations on more than 80 clinical studies. He has a PhD in human genetics with a concentration in statistical genetics from the Johns Hopkins University School of Medicine and a BS in biochemistry from Juniata College. He can be contacted at email@example.com.
Tobias Guennel, PhD, is senior director, translational informatics and biometrics at Precision for Medicine. He has extensive industry expertise across the pharmaceutical and biotech space, including regulatory submissions, translational informatics and biomarker research and development, with a focus on precision medicine-guided diagnostic and drug development. He has led the development of technologies to manage, visualize and mine biomarker data. He received a PhD in biostatistics, with a concentration in statistical genetics and genomics, from Virginia Commonwealth University. He earned his MS in applied mathematics and computer science from University of Technology Chemnitz, Germany, and received his BS in mathematics from Longwood University, Farmville, Virginia. He can be contacted at firstname.lastname@example.org.
Cite as: Kohler, J. and Guennel, T. "Biomarker Data Management in Clinical Trials: A Model for Success Under New FDA Data Exchange Standards." Regulatory Focus. January 2018. Regulatory Affairs Professionals Society.