
Background

The design of randomised controlled trials (RCTs) should incorporate characteristics (such as concealment of randomised allocation and blinding of participants and personnel) that avoid biases resulting from lack of comparability of the intervention and control groups. Empirical evidence suggests that the absence of such characteristics leads to biased intervention effect estimates, but the findings of different studies are not consistent.

Objectives

To examine the influence of unclear or inadequate random sequence generation and allocation concealment, and unclear or absent double blinding, on intervention effect estimates and between-trial heterogeneity, and whether or not these influences vary with type of clinical area, intervention, comparison and outcome measure.

Methods

Data were combined from seven contributing meta-epidemiological studies (collections of meta-analyses in which trial characteristics are assessed and results recorded). The resulting database was used to identify and remove overlapping meta-analyses. Outcomes were coded such that odds ratios < 1 correspond to beneficial intervention effects. Outcome measures were classified as mortality, other objective or subjective. We examined agreement between assessments of trial characteristics in trials assessed in more than one contributing study. We used hierarchical Bayesian bias models to estimate the effect of trial characteristics on average bias [quantified as ratios of odds ratios (RORs) with 95% credible intervals (CrIs) comparing trials with and without a characteristic] and on increases in between-trial heterogeneity.

Results

The analysis data set contained 1973 trials included in 234 meta-analyses. Median kappa statistics for agreement between assessments of trial characteristics were: sequence generation 0.60, allocation concealment 0.58 and blinding 0.87.
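The agreement figures above are kappa statistics, which correct raw percentage agreement for the agreement expected by chance. As an illustration only (the ratings below are invented, not data from the study), a minimal Python sketch of Cohen's kappa for two assessors:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical risk-of-bias assessments of the same 10 trials
# (illustrative values, not taken from the contributing studies).
a = ["adequate", "adequate", "unclear", "inadequate", "adequate",
     "unclear", "unclear", "adequate", "inadequate", "adequate"]
b = ["adequate", "unclear", "unclear", "inadequate", "adequate",
     "unclear", "adequate", "adequate", "inadequate", "adequate"]
print(cohens_kappa(a, b))
```

Here the raters agree on 8 of 10 trials, but some of that agreement is expected by chance, so kappa comes out below 0.8. Values around 0.6, as reported for sequence generation and allocation concealment, indicate only moderate agreement between assessors.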
Intervention effect estimates were exaggerated by an average of 11% in trials with inadequate or unclear (compared with adequate) sequence generation (ROR 0.89, 95% CrI 0.82 to 0.96); between-trial heterogeneity was higher among such trials. Bias associated with inadequate or unclear sequence generation was greatest for subjective outcomes (ROR 0.83, 95% CrI 0.74 to 0.94), as was the increase in heterogeneity [standard deviation (SD) 0.20, 95% CrI 0.03 to 0.32]. The effect of inadequate or unclear (compared with adequate) allocation concealment was greatest among meta-analyses with subjectively assessed outcomes (ROR 0.85, 95% CrI 0.75 to 0.95), and the increase in between-trial heterogeneity was also greatest for such outcomes (SD 0.20, 95% CrI 0.02 to 0.33). Lack of, or unclear, double blinding (compared with double blinding) was associated with an average 13% exaggeration of intervention effects (ROR 0.87, 95% CrI 0.79 to 0.96), and between-trial heterogeneity was increased among such trials (SD 0.14, 95% CrI 0.02 to 0.30). Average bias (ROR 0.78, 95% CrI 0.65 to 0.92) and between-trial heterogeneity (SD 0.37, 95% CrI 0.19 to 0.53) were greatest for meta-analyses assessing subjective outcomes. Among meta-analyses with subjectively assessed outcomes, the effect of lack of blinding appeared greater than that of inadequate or unclear sequence generation or allocation concealment.

Conclusions

Bias associated with specific reported study design characteristics leads to exaggeration of beneficial intervention effect estimates and increases in between-trial heterogeneity. For each of the three characteristics assessed, these effects were greatest for subjectively assessed outcomes. Assessments of the risk of bias in RCTs should account for these findings.
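The bias estimates above are expressed as ratios of odds ratios (RORs): the average factor separating effect estimates from trials without a safeguard from those with it. A back-of-envelope sketch of how an ROR maps to the percentages quoted (a deliberate simplification, not the study's hierarchical Bayesian model):

```python
def percent_exaggeration(ror):
    """Percentage exaggeration of a beneficial effect implied by an ROR < 1.

    With outcomes coded so that odds ratios < 1 are beneficial, an ROR of
    0.89 corresponds to effect estimates that are, on average, 11% more
    extreme in trials lacking the design safeguard.
    """
    return (1 - ror) * 100

def naive_bias_adjusted_or(observed_or, ror):
    """Divide out the average bias to approximate an unbiased odds ratio.

    Purely illustrative: the study models bias hierarchically with full
    uncertainty rather than applying a single deterministic correction.
    """
    return observed_or / ror

print(percent_exaggeration(0.89))   # sequence generation, 11%
print(percent_exaggeration(0.87))   # double blinding, 13%
```

For example, an observed odds ratio of 0.70 from a trial without double blinding would, under this naive correction with ROR 0.87, shift towards the null to roughly 0.80.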
Further research is needed to understand the effects of attrition bias, as well as the relative importance of blinding of patients, care-givers and outcome assessors, and thus to separate the effects of performance and detection bias.

Funding

National Institute for Health Research Health Technology Assessment programme.

Original publication

Journal article

Health technology assessment (Winchester, England)

Publication Date

Pages

1 - 82

Affiliation

School of Social and Community Medicine, University of Bristol, Bristol, UK.

Keywords

Humans, Data Interpretation, Statistical, Epidemiologic Research Design, Reproducibility of Results, Bias (Epidemiology), Databases, Bibliographic, Outcome Assessment (Health Care), Meta-Analysis as Topic, Randomized Controlled Trials as Topic