The statistical analysis and reporting of treatment effects in randomised trials with a binary primary endpoint requires substantial improvement, suggests NDORMS research published in BMC Medicine.

Clinical trials help doctors decide between different treatments and ensure that patients receive the best care possible. Good decision-making is only possible if these trials are done correctly and if doctors have access to all of the relevant information about these trials in their resulting academic publications.

A research team from the Oxford Clinical Trials Research Unit (OCTRU) and Centre for Statistics in Medicine, NDORMS, has found evidence that trials with binary outcomes are particularly badly reported. Around half of all clinical trials use binary outcomes. These trials look at two possible outcomes rather than a continuum, such as failure or success of a treatment, or whether patients are dead or alive at the end of the trial.

The researchers looked at 200 academic publications of clinical trials published in January 2019 that used a binary outcome as the main finding of the study. They examined the statistical methods used to analyse the data and whether the publication merely assessed if there was a difference between the treatment groups or included details of how big this difference was and how it was calculated. The researchers also measured how much missing data there was in each study and how it was considered in the analysis.

The team discovered that these clinical trials routinely did not follow best practice. Most of the studies compared their treatment groups using a statistical test that can only measure whether there is a difference, not how big it is. Almost half of the studies did not estimate how differently the two treatments had performed or how much uncertainty there was around these estimates. Many studies did not explain how complete their data was, and very few looked into how any missing data affected the final results.
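To illustrate the distinction the researchers draw, here is a minimal sketch in Python (the counts are hypothetical and not taken from any trial in the review): a chi-squared test gives only a p-value saying whether the groups differ, whereas an effect estimate, such as a risk difference with a 95% confidence interval, tells the reader how big the difference is and how much uncertainty surrounds it.

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

# Hypothetical counts (illustrative only): 30/100 events on treatment, 45/100 on control.
events = np.array([30, 45])
totals = np.array([100, 100])

# A significance test alone: says whether the groups differ, not by how much.
table = np.array([events, totals - events]).T  # rows: groups; columns: event / no event
chi2, p_value, _, _ = chi2_contingency(table)

# An effect estimate with uncertainty: risk difference and 95% Wald confidence interval.
p_trt, p_ctl = events / totals
risk_diff = p_trt - p_ctl
se = np.sqrt(p_trt * (1 - p_trt) / totals[0] + p_ctl * (1 - p_ctl) / totals[1])
ci_low, ci_high = risk_diff + norm.ppf([0.025, 0.975]) * se

print(f"Chi-squared test p-value: {p_value:.3f}")
print(f"Risk difference: {risk_diff:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Reporting the second kind of result alongside the first is what best-practice guidance asks for, and it is the element the review found was most often missing.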

"Far from being a niche statistical issue, our work demonstrates a striking and worrying failure in the statistical analysis and reporting of a large number of clinical trials," explained Professor Jonathan Cook, who initiated the study. "This failure to analyse the main outcome of interest appropriately and to present the main finding of the study in an accessible way undermines the value of the study and will lead to avoidable misinterpretation – and could lead to unnecessary patient harm."

The research team hopes to use their work to raise awareness of the limitations of current practice and the need to interpret results cautiously. They also hope to contribute towards methodology and reporting guidelines to improve the quality of future binary-outcome clinical trials.

"Current practice needs to improve," concluded Prof Cook, who believes that treatment effects should be reported clearly in all papers. "Perhaps many think it is ok to leave information out of their reports as the treatment effect is 'obvious', but this leaves the reader with work to do. This isn't acceptable to me, and we as researchers have a responsibility to up our game."