The limitations of randomized controlled trials

Daniel Weber, PhD, MSc. 01/03/2012

History

James Lind undertook the first comparative clinical trial in history in 1747, in the treatment of scurvy. Claude Bernard later published the general bases of modern experimental medicine in 1865. However, it was the development of new drugs and the evolution of methodological concepts that led to the first randomized controlled clinical trial, in 1948, which showed that the effects of streptomycin on pulmonary tuberculosis differed significantly from those of the control regimen.1 Introduced into clinical medicine with that evaluation of streptomycin in tuberculosis, randomized controlled trials have since become the gold standard for assessing the effectiveness of therapeutic agents.8 Today, “evidence-based” medicine aims to rationalize the medical decision-making process by taking into account, first and foremost, the results of randomized controlled clinical trials, which provide the highest level of evidence. In the second half of the 20th century it became clear that different kinds of clinical trials might not provide the same level of evidence, and that practitioners’ intimate convictions must be challenged by the results of controlled clinical trials.

The 1989 Cardiac Arrhythmia Suppression Trial (CAST) is a case in point: antiarrhythmic drugs were tested against placebo in patients with myocardial infarction. It was well known that ventricular arrhythmias were a marker of poor prognosis in coronary heart disease, so it was considered self-evident that drug suppression of these arrhythmias would reduce the mortality rate. The trial showed the exact opposite, with an almost three-fold increase in total mortality among patients with coronary heart disease receiving the antiarrhythmics. These results had a profound impact on the use of antiarrhythmic drugs, which became contraindicated after myocardial infarction.1

 

Requirements of clinical trials and advantages of randomized controlled trials

A clinical trial must fulfill certain methodological standards for its results to be accepted as evidence in evidence-based medicine.

First, a working hypothesis must be formulated, and the primary outcome measure must be chosen before the study begins. An appropriate major endpoint for efficacy must be selected, in keeping with the primary outcome. One may choose either a single endpoint (for instance, all-cause mortality) or a composite endpoint combining various manifestations of the same health disorder (for instance, cardiovascular mortality plus non-lethal myocardial infarction plus non-lethal ischemic stroke).

The trial must be controlled, i.e. it must compare the intervention with a standard or dummy treatment. A randomization process is used to ensure that the groups are comparable.1 Protection from selection bias is provided by secure random allocation, using telephone or computer-based randomization, and by analysis based on the groups as allocated, thus ensuring that the groups being compared differ only by chance. Performance bias can be minimized by blinding treatments (when possible) and by employing clearly described treatment policies. Detection bias may be avoided by blind outcome assessment, and attrition bias by ensuring follow-up of all patients randomized.2
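As an illustration of computer-based random allocation, the sketch below implements permuted-block randomization, one common scheme for keeping group sizes balanced while preserving unpredictability. The block size, arm labels and seed are arbitrary choices for the example, not a prescription from the sources cited here.

```python
# Minimal sketch of computer-based random allocation using permuted blocks.
# Illustrative only; real trials use validated, concealed randomization systems.
import random

def permuted_block_allocation(n_participants, block_size=4,
                              arms=("treatment", "control"), seed=None):
    """Allocate participants in randomly shuffled blocks so that group
    sizes stay balanced while each assignment remains unpredictable."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # random order within each block
        allocation.extend(block)
    return allocation[:n_participants]

print(permuted_block_allocation(10, seed=42))
```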

The patients must be monitored and the results analyzed in a double-blind manner. The required number of patients is calculated based on the working hypothesis (“superiority” trial or “equivalence” trial), as well as the spontaneous variability of the main endpoint and the alpha and beta statistical risks.1 Pre-study sample size calculations should always be made, and funding bodies, independent protocol review bodies and journal editors should all demand them. A sensitivity analysis should be considered, with indicative estimates rather than unrealistically precise numbers. Small trials should be reported as hypothesis-forming.2
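To make the role of the alpha and beta risks concrete, here is a minimal pre-study sample size sketch for a two-arm superiority trial with a binary endpoint, using the standard normal-approximation formula for two proportions. The event rates, alpha and power below are invented for illustration.

```python
# Illustrative sample size calculation for a two-arm superiority trial
# with a binary endpoint (normal approximation for two proportions).
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a difference
    between event rates p1 and p2 at the given alpha and power."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided alpha (type I) risk
    z_beta = norm.ppf(power)           # beta (type II) risk = 1 - power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detecting a fall in event rate from 10% to 5%.
print(n_per_group(0.10, 0.05))  # roughly 430+ participants per group
```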

The experimental design (crossover or parallel groups) is chosen according to the primary outcome measure and the disease characteristics.1 The most frequent choice of study design is between a parallel-group and a crossover design. A factorial design allows two or more treatments to be investigated simultaneously and efficiently.2
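As a sketch of the factorial idea, the toy code below randomizes each participant on two treatment questions at once (a 2x2 design), so a single trial can answer both. The factor names and sample size are hypothetical.

```python
# Toy 2x2 factorial allocation: one trial answers two treatment
# questions because every participant is randomized on both factors.
import itertools
import random

factor_a = ("drug A", "placebo A")  # hypothetical first comparison
factor_b = ("drug B", "placebo B")  # hypothetical second comparison
cells = list(itertools.product(factor_a, factor_b))  # the 4 combinations

rng = random.Random(1)
for participant in range(1, 9):
    a, b = rng.choice(cells)  # simple randomization across the 4 cells
    print(f"participant {participant}: {a} + {b}")
```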

Finally, the results must be analyzed on an intention-to-treat basis, taking into account all the patients who were initially randomized. The results of these methodologically sound trials form the basis for official therapeutic guidelines, which help physicians to choose the best treatments for their patients.1 For these reasons, the randomized controlled trial is currently considered the most powerful research tool for evaluating health technologies; yet it suffers numerous disadvantages.
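The toy sketch below contrasts an intention-to-treat analysis with a per-protocol analysis; the participant records and event rates are invented solely to show how the two denominators differ when a randomized patient drops out.

```python
# Toy contrast of intention-to-treat (ITT) vs per-protocol analysis.
# Records are (randomized arm, completed protocol?, had event?).
participants = [
    ("treatment", True,  False),
    ("treatment", False, True),   # dropped out, still counted under ITT
    ("control",   True,  True),
    ("control",   True,  False),
]

def event_rate(rows, arm, itt=True):
    """Event rate in a group as randomized (ITT) or restricted to
    protocol completers (per-protocol)."""
    group = [r for r in rows if r[0] == arm and (itt or r[1])]
    return sum(r[2] for r in group) / len(group)

print("ITT treatment arm:         ", event_rate(participants, "treatment"))
print("Per-protocol treatment arm:", event_rate(participants, "treatment", itt=False))
```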

 

Disadvantages of randomized controlled trials

Despite the significant advantages of randomized controlled trials, a number of factors limit their quality, number and progress. The issues to be considered include design, barriers to participation, conduct and structure, analysis, reporting, and costs.

The design of a randomized controlled trial should follow a systematic review of existing evidence, resulting in a well-formulated question that specifies participants, interventions and outcomes. Wide patient eligibility criteria are generally preferred, to ensure representativeness of the wider patient population and good recruitment rates; however, a more homogeneous group may be preferable when evaluating expensive or hazardous interventions. Outcome measures need to be clinically and socially relevant, well defined, valid, reliable, sensitive to important change and measured at appropriate times. There is evidence that the use of intermediate or surrogate outcomes has been misleading.2 In particular, it remains an empirical question whether RCT-based research can inform mental health policy. Without major design innovations, the information generated by this research is likely to have limited practical use, especially if the RCT model cannot control for the effect of social complexity and the interaction between social complexity and dynamic systemic change.4

Several barriers exist to both clinician and patient participation. Barriers to clinician participation include time constraints, lack of staff and training, concern about the impact on doctor-patient relationships, concern for patients, loss of professional autonomy, difficulty with consent procedures, lack of reward and recognition, and an insufficiently interesting question. To overcome barriers to clinician recruitment, a trial should address an important research question, and the protocol and data collection should be as straightforward as possible, with demands on clinicians and patients kept to a minimum.

In addition, there are barriers to patient participation, including the additional demands of the trial, patient preferences, concern caused by uncertainty, and concerns about information and consent.2 Because of these limitations in patient participation, extrapolating the results of randomized controlled clinical trials to the general patient population is not always straightforward. It is well known that patients who participate in clinical trials are highly selected and therefore somewhat unrepresentative. In addition, their numbers are limited and the treatment period is often much shorter than in routine management of a chronic disease. Finally, patients in clinical trials are monitored more closely than in routine practice. Hence, post-marketing pharmacoepidemiological studies are required, in which cohorts of patients exposed to the treatment in question are monitored for long enough to determine the precise risk-benefit ratio.1 Dedicated research staff may be required to support clinical staff and patients, and the recruitment aspects of an RCT should be carefully planned and piloted.2

Many trials fail to begin because of a lack of funding or other logistical problems. Economic evaluations are reported in few randomized controlled studies, possibly because of difficulties in conducting such evaluations and the limited ability to generalize from one healthcare context to another. Some components of an economic analysis are subject to uncertainty; statistical tests and confidence intervals should therefore be used. There has been little research into trial costs, but the costs of caring for patients in randomized controlled studies may be perceived as an unaffordable new service, delaying or preventing recruitment at some participating centers. Of the trials that do start, half have recruitment difficulties, leading to abandonment or reduced size and hence loss of statistical power. Recruitment problems may be reduced by piloting, using multiple recruitment strategies, making contingency plans in case recruitment is slow, and using recruitment coordinators, though none of these approaches has been rigorously evaluated.2

Inadequate compliance with the study protocol can lead to false-negative or false-positive results. Some assessment of compliance (by both clinicians and participants) should be made, although compliance may be difficult to measure.

Quality control is important, but too much of it may make randomized controlled trials prohibitively expensive and hinder recruitment. Trials need good organizational and administrative bases, but there is little research evaluating the optimal structure, and the precise roles of steering committees and data monitoring committees have been poorly evaluated. There is concern about bias in the design, conduct, analysis and reporting of commercially sponsored trials, and independent monitoring should be considered.2

The reporting of randomized controlled trials also requires improvement, through adoption of the Consolidated Standards of Reporting Trials (CONSORT) guidelines. Conclusions should be supported by the data presented. About 10% of trials remain unpublished, while many others are published only in conference proceedings, particularly if they are small and show non-significant treatment effects. Conversely, multiple publication of a single study is also problematic, particularly for studies showing significant results. Prospective registration of all randomized controlled trials is recommended.2

Although highly successful in investigating remedial therapy, randomized clinical trials have sometimes created rather than clarified controversy when treatments were given for the complex problems involved in either the primary prevention of disease or the secondary prevention of adverse progression in established disease. Consequently, despite the magnificent scientific achievements of randomized clinical trials, the foundation for a basic science of patient care will also require major attention to the events and observations that occur in the ordinary circumstances of clinical practice.3 After some years of being largely dismissed in the ranking of evidence in medicine, alternatives to the randomized controlled trial have recently been debated in public health and related population and social-service fields, to identify the trade-offs in their use when randomization is impractical or unethical.5

Evidence-based medicine represents a shift in medical paradigms. It is about solving clinical problems, acknowledging that intuition, unsystematic clinical experience and pathophysiologic rationale are insufficient grounds for clinical decision-making. Although randomized controlled trials (RCTs) have been positioned at the top of the hierarchy of evidence, some critics argue that this hierarchy has done nothing more than glorify the results of imperfect experimental designs, conducted on unrepresentative populations in controlled research environments, above all other sources of evidence that may be equally valid or far more applicable in given clinical circumstances. The design, implementation and reporting of randomized trials are crucial: biased interpretation of their results, whether in favor of or opposed to a treatment, together with a lack of proper understanding of randomized trials, leads to poor appraisal of their quality. Controlled trials come in multiple types, including placebo-controlled and pragmatic trials. Placebo-controlled RCTs have multiple shortcomings, such as cost and length, which limit their availability for studying certain outcomes; they may also suffer from faulty implementation or poor generalizability, and study design ultimately may not be the prime consideration when weighing evidence for treatment alternatives. In pragmatic (practical) clinical trials, by contrast, the interventions compared are clinically relevant alternatives, participants reflect the underlying affected population, participants come from a heterogeneous range of practice settings and geographic locations, and trial endpoints reflect a broad range of meaningful clinical outcomes.6

 

Alternatives to randomized controlled trials

Evidence-based policy is a dominant theme in contemporary public services, but the practical realities and challenges involved in using evidence in policy-making are formidable. Part of the problem is one of complexity. In health services and other public services, we are dealing with complex social interventions acting on complex social systems: things like league tables, performance measures, regulation and inspection, or funding reforms. These are not ‘magic bullets’ that will always hit their target, but programs whose effects depend crucially on context and implementation. Traditional methods of review focus on measuring and reporting program effectiveness, often find that the evidence is mixed or conflicting, and provide little or no clue as to why an intervention worked or did not work when applied in different contexts or circumstances, deployed by different stakeholders, or used for different purposes.4

In a realist review, the first step is to make explicit the program theory (or theories): the underlying assumptions about how an intervention is meant to work and what impacts it is expected to have. The review then looks for empirical evidence to populate this theoretical framework, supporting, contradicting or modifying the program theories as it goes. Its results combine theoretical understanding and empirical evidence, and focus on explaining the relationship between the context in which the intervention is applied, the mechanisms by which it works and the outcomes that are produced. The aim is to enable decision-makers to reach a deeper understanding of the intervention and how it can be made to work most effectively. Realist review does not provide simple answers to complex questions. It will not tell policy-makers or managers whether something works or not, but it will provide the policy and practice community with the kind of rich, detailed and highly practical understanding of complex social interventions that is likely to be of much more use when planning and implementing programs at a national, regional or local level.4

Observational studies have several advantages over randomized, controlled trials, including lower cost, greater timeliness, and a broader range of patients. Concern about inherent bias in these studies, however, has limited their use in comparing treatments. Observational studies are used primarily to identify risk factors and prognostic indicators and in situations in which randomized, controlled trials would be impossible or unethical. The empirical assessment of observational studies rests largely on a number of influential comparative studies from the 1970s and 1980s. These studies suggest that observational studies inflate positive treatment effects, as compared with randomized, controlled trials. Evaluations of observational studies have primarily included studies from the 1960s and 1970s. Possible methodological improvements include a more sophisticated choice of data sets and better statistical methods. Newer methods may have eliminated some systematic bias.7
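As one hedged illustration of the “better statistical methods” mentioned above, the sketch below stratifies a simulated observational dataset on an estimated propensity score, a widely used approach to reducing measured confounding. The dataset, variable names and true effect size are all invented, and a real analysis would require far more careful model checking.

```python
# Hedged sketch: propensity-score stratification on simulated
# observational data. All numbers and names here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(60, 10, n)                                 # confounder
p_treat = 1 / (1 + np.exp(-(age - 60) / 10))                # older patients treated more often
treated = (rng.random(n) < p_treat).astype(int)
outcome = 0.02 * age + 0.5 * treated + rng.normal(0, 1, n)  # true treatment effect = 0.5

# Model each patient's probability of treatment given the confounder.
X = age.reshape(-1, 1)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# A naive comparison is confounded by age; stratifying on the propensity
# score and averaging within-stratum differences reduces that bias.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
diffs = [outcome[(strata == s) & (treated == 1)].mean()
         - outcome[(strata == s) & (treated == 0)].mean()
         for s in range(5)]
print(f"naive estimate: {naive:.2f}, stratified estimate: {np.mean(diffs):.2f}")
```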

 

Conclusion

Randomized, controlled trials will (and should) remain a prominent tool in clinical research, but the results of a single randomized, controlled trial, or of only one observational study, should be interpreted cautiously. If a randomized, controlled trial is later determined to have given wrong answers, evidence both from other trials and from well-designed cohort or case–control studies can and should be used to find the right answers. The popular belief that only randomized, controlled trials produce trustworthy results and that all observational studies are misleading does a disservice to patient care, clinical investigation, and the education of health care professionals.8

References

1. Jaillon P. Bull Acad Natl Med. 2007;191(4-5):739-756.

2. Prescott RJ, Counsell CE, Gillespie WJ, et al. Health Technol Assess. 1999;3(20).

3. Feinstein AR. Ann Intern Med. 1983;99(4):544-550.

4. Pawson R, Greenhalgh T, Harvey G, Walshe K. J Health Serv Res Policy. 2005;10(Suppl 1):21-34.

5. Sanson-Fisher RW, et al. Am J Prev Med. 2007;33(2):155-161. http://dx.doi.org/10.1016/j.amepre.2007.04.007

6. Manchikanti L, Hirsch JA, Smith HS. Pain Physician. 2008;11(6):717-773.

7. Benson K, Hartz AJ. N Engl J Med. 2000;342:1878-1886.

8. Concato J, Shah N, Horwitz RI. N Engl J Med. 2000;342:1887-1892.
