Utilizing a Bayesian Hierarchical Model for Clinical Trial Quality Design
Explore how a Bayesian Hierarchical Model can be leveraged to design quality into clinical trials and ensure compliance with ICH E6 R2 Quality Tolerance Limits. Learn about the Risk-Based approach, Quality Tolerance Limits methodology, and the application of Bayesian modeling for early phase studies.
Presentation Transcript
Utilizing a Bayesian Hierarchical Model to design quality into a clinical trial and allow compliance with ICH E6 R2 Quality Tolerance Limits, and what approach can be used for Early Phase Studies
PSI Webinar, 2nd December 2020
Christine Wells, Senior Statistical Scientist/CDM Technical Leadership, Roche Products Ltd
Contact Details: chris.wells.cw1@roche.com
INTRODUCTION
ICH E6 (R2) recommends a risk-based approach to quality management that comprises 7 key steps:
1) Critical Process and Data Identification
2) Risk Identification
3) Risk Evaluation
4) Risk Control
5) Risk Communication
6) Risk Review
7) Risk Reporting
BACKGROUND
Historically, a one-size-fits-all approach to quality was taken; the R2 addendum requires a fit-for-purpose approach.
One component of the risk-based approach is the concept of Quality Tolerance Limits (QTLs).
WHAT ARE QUALITY TOLERANCE LIMITS?
Quality Tolerance Limits (QTLs) are:
Related to parameters assessed at a study level
A threshold used to identify possible systematic issues
Established for trial parameters
Requirements of a QTL:
ICH states that QTLs must be predefined before study start
They must be based on statistical characteristics and profound medical knowledge
It is recommended to have 3-5 QTLs per study
They must be reported in the CSR
QTL METHODOLOGY
A Bayesian Hierarchical Model is used to estimate distribution percentiles.
Create distribution plots and percentile estimates for historical studies (Reference Studies) and use this information to set Quality Tolerance Limits and associated thresholds (Secondary Limit, Trend Alert) for three Quality Parameters: under- and over-reporting of AEs, over-reporting of SAEs (optional), and Major Protocol Deviations.
The Bayesian Hierarchical Model (BHM) is inspired by a Bayesian meta-analysis example in Berry et al (2011):
Data_i ~ Pois(λ_i · exposure_i), AER λ_i ~ Gamma(shape, 1/scale), scale & shape ~ Exp(·)
BHM posterior: p(λ, shape, scale | Data) ∝ likelihood p(Data | λ) × p(λ | shape, scale) × prior p(shape, scale)
This gives a hierarchy in which the distribution of the prior is informed by historical data and profound medical knowledge.
A Markov Chain Monte Carlo (MCMC) algorithm is used to generate the posterior distribution for the parameters (10,000 simulations).
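For concreteness, the following is a minimal sketch of this kind of Poisson-Gamma hierarchy written with the PyMC library; it is not the presenter's production code. The site counts, exposures, the unit-rate Exponential hyperpriors, and the use of a simulated "new site" rate to read off study-level percentiles are illustrative assumptions.

```python
# Minimal sketch of the hierarchical Poisson-Gamma model described above
# (illustrative data and hyperprior rates; not the presenter's production code).
import numpy as np
import pymc as pm

events   = np.array([12, 30, 7, 45, 22])         # AE counts per site (hypothetical)
exposure = np.array([4.1, 9.8, 2.5, 15.0, 7.3])  # patient-years per site (hypothetical)
n_sites  = len(events)

with pm.Model() as bhm:
    # Hyperpriors: shape and scale of the Gamma prior, each given an Exponential prior
    g_shape = pm.Exponential("g_shape", lam=1.0)
    g_scale = pm.Exponential("g_scale", lam=1.0)

    # Site-level adverse event rates: AER_i ~ Gamma(shape, 1/scale)
    aer = pm.Gamma("aer", alpha=g_shape, beta=1.0 / g_scale, shape=n_sites)

    # Likelihood: observed counts ~ Poisson(AER_i * exposure_i)
    pm.Poisson("obs", mu=aer * exposure, observed=events)

    # MCMC: sample the joint posterior p(AER, shape, scale | Data)
    idata = pm.sample(draws=10_000, tune=2_000, chains=4, random_seed=1)

# Study-level percentile estimates: for each posterior draw of (shape, scale),
# simulate a "new site" event rate and read off the percentiles of interest.
shape_s = idata.posterior["g_shape"].values.ravel()
scale_s = idata.posterior["g_scale"].values.ravel()
new_site_aer = np.random.default_rng(1).gamma(shape_s, scale_s)
print(np.percentile(new_site_aer, [5, 20, 50, 80, 95]))
```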
OUTPUTS FROM THE MODELLING OF THE ADVERSE EVENT RATE PARAMETER - Reference Study
OVERVIEW OF QTLS AND ASSOCIATED THRESHOLDS
Default comparisons used are as follows (an illustrative check of these comparisons is sketched after this slide):
Study-level QTL: set at the 5th and 95th percentile estimates from the Reference Study to define the acceptable range for the event rate of the study, controlling the risk of under- and over-reporting respectively; compared with the Target (ongoing) Study median.
Study-level Secondary Limit: set at the 20th and 80th percentile estimates from the Reference Study for under- and over-reporting respectively; compared with the Target (ongoing) Study median.
Trend Alert: by default uses the study-level QTL (the 5th and 95th percentile estimates from the Reference Study for under- and over-reporting respectively); compared with the Target Study 10th and 90th percentiles respectively.
Once the limits are set, the BHM is run on the ongoing study (Target Study) to estimate its distribution percentiles.
Further, the methodology (through the Trend Alert) ensures that site-to-site variability is within the expected range and identifies sites possibly out of process control, protecting the study median event rate/proportion.
The methodology is run every 12 weeks during the study to allow action and mitigation steps to take effect.
The methodology is applicable to different data types, but currently only event rate and proportion data have been programmed.
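The sketch below shows one way the default comparisons above could be checked once the reference-study and target-study percentile estimates are available. The function name, the dictionary layout, and all numerical values are hypothetical, and the exact breach logic used in practice may differ.

```python
# Illustrative check of the QTL, Secondary Limit and Trend Alert comparisons
# described above. Percentile values are made up; names are ours, not the
# presenter's production code.
def check_qtl_breaches(ref_pct, target_pct):
    """ref_pct / target_pct: dicts mapping percentile -> event-rate estimate."""
    flags = []
    # QTL: target median outside the reference 5th-95th percentile range
    if target_pct[50] < ref_pct[5]:
        flags.append("QTL breach: possible under-reporting (median below ref 5th pct)")
    if target_pct[50] > ref_pct[95]:
        flags.append("QTL breach: possible over-reporting (median above ref 95th pct)")
    # Secondary limit: target median between the QTL and the 20th/80th percentiles
    if ref_pct[5] <= target_pct[50] < ref_pct[20]:
        flags.append("Secondary limit: trending toward under-reporting")
    if ref_pct[80] < target_pct[50] <= ref_pct[95]:
        flags.append("Secondary limit: trending toward over-reporting")
    # Trend alert: target 10th/90th percentiles cross the study-level QTLs,
    # suggesting site-to-site variability outside the expected range
    if target_pct[10] < ref_pct[5] or target_pct[90] > ref_pct[95]:
        flags.append("Trend alert: site-level variability outside expected range")
    return flags

ref    = {5: 0.8, 20: 1.1, 80: 2.4, 95: 3.0}   # reference-study percentile estimates
target = {10: 0.7, 50: 1.0, 90: 2.6}           # target-study percentile estimates
for flag in check_qtl_breaches(ref, target):
    print(flag)
```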
OUTPUTS FROM THE MODELLING OF THE ADVERSE EVENT RATE PARAMETER - TARGET STUDY
Summary of Breaches in Target Study (table in the original presentation)
OUTPUTS FROM THE MODELLING OF THE ADVERSE EVENT RATE PARAMETER - SITE LEVEL AND STUDY LEVEL PLOTS
Site Level Plot and Study Level Plot (figures in the original presentation)
FURTHER PARAMETERS TO CONSIDER
The current methodology allows any parameter that is a rate or a proportion to be modelled, so teams can pick their important parameters (a sketch of a proportion-type model follows this slide). Time-to-event data is currently being programmed.
Examples of Quality Parameters given in the TransCelerate Guidance Document:
% of randomized subjects not meeting per-protocol population criteria
% of subjects with premature drug discontinuation
% of subjects classified as lost to follow-up in the close-out period of the trial
Examples from industry:
Protocol deviations (baseline and rate)
Proportion of patients with early discontinuation
Proportion of patients lost to follow-up
Proportion of patients withdrawn due to AE
Proportion of patients incorrectly randomised
Proportion of patients with premature drug discontinuation
Proportion of patients not reaching the primary endpoint
Proportion of ineligible patients
Number of SAEs/AESIs reported late (requires operational data)
Time to progression
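As an illustration of how a proportion-type parameter (for example, early drug discontinuation per site) could be handled in the same hierarchical framework, here is a Beta-Binomial analogue of the Poisson model above. This is our own sketch: the presentation does not specify the parameterisation used for proportional data, and the counts and hyperprior rates are made up.

```python
# Sketch of a hierarchical model for a proportion-type quality parameter
# (our Beta-Binomial analogue; not necessarily the parameterisation used in the talk).
import numpy as np
import pymc as pm

discontinued = np.array([2, 5, 1, 8])      # early discontinuations per site (hypothetical)
enrolled     = np.array([20, 40, 15, 60])  # patients enrolled per site (hypothetical)

with pm.Model() as bhm_prop:
    # Hyperpriors for the Beta prior on the site-level proportions
    a = pm.Exponential("a", lam=0.1)
    b = pm.Exponential("b", lam=0.1)
    # Site-level proportions and Binomial likelihood
    p = pm.Beta("p", alpha=a, beta=b, shape=len(enrolled))
    pm.Binomial("obs", n=enrolled, p=p, observed=discontinued)
    idata = pm.sample(draws=10_000, tune=2_000, random_seed=1)

# Percentile estimates for the site-level discontinuation proportions
print(np.percentile(idata.posterior["p"].values.ravel(), [5, 50, 95]))
```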
EARLY PHASE OR SMALL STUDIES
Some Phase 1 studies are too small, and their timelines too fast, to make use of QTLs.
Many parameters have only a small number of events (e.g. the proportion of patients with early discontinuation from study drug), which makes them difficult to monitor.
Sample size parameters give guidance on the worst case, e.g. the proportion of patients discontinuing.
Simple method: the team estimates a worst-case scenario and the expected number of cases, and a secondary (warning) limit is set in between. For example, for the proportion of patients with early drug discontinuation the team might expect approximately 5%, set the worst-case scenario at 20%, and set the secondary limit at 15%. The team then monitors on a monthly (or weekly) basis to ensure action is taken if the secondary limit is being approached (an illustrative check is sketched below).
Further, we have a set of interactive outputs that allow teams to keep oversight of the rate of AEs/deviations, adjusting for duration in study.
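A minimal sketch of this "QTL-light" periodic check, using the example limits above (expected ~5%, secondary limit 15%, worst case 20%). The function name, the thresholds-as-arguments design, and the example counts are illustrative rather than part of the presented method.

```python
# Simple periodic check against the secondary (warning) and worst-case limits
# for a small study; names and defaults are illustrative assumptions.
def qtl_light_check(n_events, n_patients, warning=0.15, worst_case=0.20):
    observed = n_events / n_patients
    if observed >= worst_case:
        return f"{observed:.1%}: worst-case limit reached - escalate"
    if observed >= warning:
        return f"{observed:.1%}: secondary limit reached - investigate and act"
    return f"{observed:.1%}: within expected range"

# Monthly review, e.g. 4 early drug discontinuations out of 30 dosed patients
print(qtl_light_check(4, 30))   # -> "13.3%: within expected range"
```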
CHALLENGES
Access to historical data.
Teams' understanding.
Small studies.
Selecting the appropriate event exposure period: it needs constant rigour of reporting and a constant event rate. There is a difference between accurately estimating the event rate and monitoring the quality of reporting over a chosen period.
Teams often have little knowledge of the datasets (SDTMs) and the variables we are using (particularly reference start and end date); they deal with the data at other stages, in other data repositories.
Complex designs: cohorts, parent/child designs.
When to adjust the QTLs: teams tend to let a study breach at every run. After the first run the focus moves very much to the target study (because there is no such thing as a perfect reference study to match on), and it can then feel like cheating to move the QTLs; but you do so with understanding, to guard against future drift. The historical data is more of a starting point than a strict guide.
The concept of QTLs doesn't quite fit, because clinical trials are not like manufacturing. In many ways we are having to make it fit and work out how it can aid us in monitoring trial quality.
CONCLUSION
This robust methodology allows teams to model the event rate of historical studies to inform the expected event rate of the study under investigation.
It provides full knowledge of the chosen quality parameters and insights into site adherence to reporting processes.
It designs quality into a clinical trial and, most importantly, safeguards patient safety.
Phase 1 and other small studies may find QTLs inappropriate; a QTL-light method may be used instead.