Enhancing the Efficiency of Adaptive Platform Trials Through the Exploration of Alternative Treatment Ranking Methods (MSc Defense)

Presenter: Abigail McGrory

Supervisory Committee: Anna Heath (Supervisor), Michael Escobar, Haolun Shi

Chair: Tony Panzarella

Date and Time: Tuesday, August 20, 2024, 11:00 a.m.–1:00 p.m. ET

Location: 155 College Street, Health Sciences Building, Room 734

Abstract:

Background: Adaptive platform trials (APTs) benefit patients by offering a more flexible and versatile design than traditional randomized controlled trials. An APT can evaluate multiple interventions for a single disease within one trial, avoiding the need to run several separate trials. A common approach to comparing treatments in an APT is to calculate the posterior probability that an intervention is the most efficacious (Pbest). This method, however, has been criticized when used in other research areas. The purpose of this study is to identify and assess alternative methods for concurrently evaluating multiple treatments in an APT, with the goal of increasing the efficiency and accuracy of APTs to save time, resources, and patient lives.

Approach: Three alternative ranking methods were identified through a literature review: (1) the Surface Under the Cumulative Ranking curve (SUCRA), which summarizes the full ranking distribution; (2) Mean Posterior Rank, which averages the ranks of each treatment across posterior draws; and (3) Pairwise Comparison, which compares treatment effects through all pairwise rankings. A simulation study was implemented across multiple trial designs. The ranking methods were applied to each design, and the power and expected sample size of the trials were used as performance metrics. The superiority and futility thresholds were optimized for each trial to ensure a fair comparison.

Results: SUCRA, Mean Posterior Rank, and Pairwise Comparison performed similarly, with power of approximately 72% and an expected sample size of about 140 patients. Pbest achieved the same power of 72% with a smaller expected sample size of approximately 120 patients. These results were consistent across all trial designs.

Conclusion: Pbest is the strongest method for comparing multiple interventions in an APT: it matches the power of the other ranking methods while reducing the number of patients needed in the trial. However, this holds only when Pbest is used at its optimal decision threshold, which differs from the thresholds used in current APTs. Further research should implement more complex trial designs and non-normal outcomes to confirm that these results apply to all APTs.
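For readers unfamiliar with these ranking methods, the sketch below illustrates how Pbest, SUCRA, Mean Posterior Rank, and pairwise comparison probabilities can each be computed from the same matrix of posterior samples of treatment effects. This is a minimal illustration, not the thesis code: the simulated draws, the convention that larger effects are better, and the function name ranking_metrics are all assumptions for the example.

```python
# Minimal sketch: the four treatment-ranking metrics from posterior draws.
# Assumes `draws` is an (n_samples, n_treatments) array of posterior samples
# of the treatment effects, where larger values indicate greater efficacy.
import numpy as np

def ranking_metrics(draws: np.ndarray):
    n_samples, n_treatments = draws.shape

    # Rank treatments within each posterior draw (rank 1 = best).
    # argsort of -draws orders treatments best to worst; a second argsort
    # converts those positions back into per-treatment ranks.
    order = np.argsort(-draws, axis=1)
    ranks = np.argsort(order, axis=1) + 1  # shape (n_samples, n_treatments)

    # Pbest: posterior probability that each treatment is ranked first.
    p_best = np.mean(ranks == 1, axis=0)

    # Mean Posterior Rank: average rank of each treatment across draws.
    mean_rank = ranks.mean(axis=0)

    # SUCRA: surface under the cumulative ranking curve, summarizing the
    # whole ranking distribution (1 = always best, 0 = always worst).
    # cum_probs[k, j] = P(treatment j is ranked (k+1)-th or better).
    rank_probs = np.stack([(ranks == k).mean(axis=0)
                           for k in range(1, n_treatments + 1)])
    cum_probs = rank_probs.cumsum(axis=0)
    sucra = cum_probs[:-1].mean(axis=0)  # average over ranks 1..K-1

    # Pairwise Comparison: P(treatment i beats treatment j) for all pairs.
    pairwise = np.mean(draws[:, :, None] > draws[:, None, :], axis=0)

    return p_best, mean_rank, sucra, pairwise

# Example with simulated posterior draws for three treatments:
rng = np.random.default_rng(0)
draws = rng.normal(loc=[0.0, 0.3, 0.5], scale=1.0, size=(10_000, 3))
p_best, mean_rank, sucra, pairwise = ranking_metrics(draws)
print("Pbest:", p_best.round(3))
print("Mean rank:", mean_rank.round(2))
print("SUCRA:", sucra.round(3))
```

In an APT, any one of these summaries would then be compared against a superiority or futility threshold at each interim analysis to decide whether an arm is dropped or the trial stops; the thesis optimizes those thresholds per trial so the methods can be compared fairly.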