Randomised trials are not always the Holy Grail
Evidence is the base of medicine, but common sense is the salt of it (Slava Ryndine in Schein’s Common Sense Emergency Abdominal Surgery).
Medical scientific studies aim to gather high-quality evidence that enables healthcare providers to deliver optimal care and improve patient health. Depending on the research question, study designs other than the randomised controlled trial (RCT) may be better suited to this end. The intention is certainly not to knock the RCT off its methodological pedestal, but rather to add a critical note to this design and point out to the reader the potential benefits of alternative study designs within the (surgical) research world.
Within the medical world, the RCT is considered the gold standard for determining the effectiveness of a given intervention and is thus the cornerstone of evidence-based medicine. Yet this design also has shortcomings. Random allocation of treatment is feasible when evaluating an experimental treatment that patients can only undergo in a study setting, or when patients have no treatment preference. Surgical RCTs, however, frequently cannot meet these conditions, for instance because the treatments to be compared have very different characteristics, or because both are already used in daily practice. Research shows that fewer than half of RCTs in the Netherlands reach their target number of participants by the planned end date.
A systematic review of the characteristics of surgical RCTs shows that study populations are usually small, with a median of 122 patients, and that the initially calculated sample sizes are not always achieved. A recent report by Zorg Onderzoek Nederland and the Medical Sciences area of the Netherlands Organisation for Scientific Research (ZonMw) shows that 56 percent of Dutch care evaluations (usually RCTs) incur a delay of more than six months during the inclusion phase, and 19 percent even of more than two years. Dominant patient treatment preferences play a major role here: they complicate recruitment, sample sizes are not reached within the specified time, and drop-out shortly after randomisation is high. As a result, such RCTs lack the power to provide convincing medical-scientific evidence.
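To illustrate how under-recruitment erodes power, the following sketch uses the standard normal approximation for a two-arm comparison of means. The numbers (a 0.5-standard-deviation effect, 61 patients per arm as half the median surgical RCT size of 122) are hypothetical and chosen only for illustration.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(n_per_arm, delta, sd, alpha=0.05):
    """Approximate power of a two-arm comparison of means
    (normal approximation, two-sided test)."""
    phi = NormalDist().cdf
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z = abs(delta) / (sd * sqrt(2.0 / n_per_arm))
    return phi(z - z_alpha)

# Hypothetical trial: 61 patients per arm, expecting a 0.5-SD effect
print(round(power_two_sample(61, 0.5, 1.0), 2))  # → 0.79
# If recruitment stalls at 35 per arm, power drops well below 80%
print(round(power_two_sample(35, 0.5, 1.0), 2))  # → 0.55
```

The point of the sketch is that power falls steeply, not linearly, with the shortfall in inclusions, which is why trials that miss their calculated sample size often cannot deliver a convincing answer.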
Whether an intervention is effective can be answered in several ways.
In the pyramid of primary scientific evidence, the RCT comes first, immediately followed by the prospective (comparative) study. In prospective research, patients selected by inclusion and exclusion criteria are followed over time to evaluate the occurrence of a particular outcome. In observational research, however, no randomisation takes place and no intervention or action is imposed on patients. A disadvantage of this design is that the treatment groups to be compared may be imbalanced in terms of patient characteristics ('confounding by indication'). This should therefore be corrected for, as far as possible, in the statistical analysis.
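A minimal sketch of what such a correction does, using entirely hypothetical counts: sicker patients are preferentially steered toward surgery, so the crude comparison of complication risks is confounded by indication, while direct standardisation to a common severity distribution recovers the fairer comparison.

```python
# Hypothetical counts: (treatment, severity) -> (events, n).
# Severe cases are steered toward surgery (confounding by indication).
data = {
    ("surgery", "mild"):       (2, 40),
    ("surgery", "severe"):     (18, 60),
    ("conservative", "mild"):   (4, 60),
    ("conservative", "severe"): (14, 40),
}

def crude_risk(treatment):
    events = sum(e for (t, _), (e, _) in data.items() if t == treatment)
    total = sum(n for (t, _), (_, n) in data.items() if t == treatment)
    return events / total

def standardised_risk(treatment):
    # Direct standardisation to the combined severity distribution
    total_n = sum(n for _, (_, n) in data.items())
    risk = 0.0
    for s in ("mild", "severe"):
        stratum_n = sum(n for (_, sv), (_, n) in data.items() if sv == s)
        e, n = data[(treatment, s)]
        risk += (e / n) * (stratum_n / total_n)
    return risk

print(crude_risk("surgery"), crude_risk("conservative"))  # → 0.2 0.18
print(round(standardised_risk("surgery"), 3),
      round(standardised_risk("conservative"), 3))        # → 0.175 0.208
```

In this invented example the crude risks suggest surgery is worse, while the severity-adjusted risks reverse the conclusion; in practice the adjustment is usually done with multivariable regression rather than simple standardisation.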
A hybrid form combining the RCT with a prospective study is the comprehensive cohort design (CCD), also referred to in the literature as a 'patient preference trial'. This design comprises two cohorts: a randomised cohort, in which patients without a dominant treatment preference are included, and an observational prospective cohort, in which eligible patients who do have a treatment preference are included. In the prospective cohort of a CCD, the same imbalance in patient characteristics may occur between treatment groups. The results of the analyses of the randomised cohort are then pooled with the multivariable-adjusted results of the observational cohort, in a way similar to a meta-analysis. Despite the relative unfamiliarity of the CCD, several clinical trials using this design have been successfully conducted and completed (Kuiper et al. 2023).
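The pooling step can be sketched as a two-study fixed-effect (inverse-variance) meta-analysis of the two cohort estimates. The effect sizes and standard errors below are hypothetical; a real analysis would also examine heterogeneity between the cohorts before pooling.

```python
from math import sqrt

def pool_fixed_effect(estimates):
    """Fixed-effect (inverse-variance) pooling of (estimate, SE) pairs,
    as in a two-study meta-analysis of the two CCD cohorts."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * est for (est, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = 1.0 / sqrt(sum(weights))
    return pooled, pooled_se

# Hypothetical log risk ratios: one from the randomised cohort, one
# multivariable-adjusted estimate from the observational cohort
rct = (-0.30, 0.20)            # (estimate, standard error)
observational = (-0.20, 0.12)
est, se = pool_fixed_effect([rct, observational])
print(round(est, 3), round(se, 3))  # → -0.226 0.103
```

Because the observational cohort is typically larger, it receives more weight, while the randomised cohort anchors the comparison; the pooled standard error is smaller than either cohort's alone.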