
The models used to guide the hard medical decisions associated with the spread of the COVID-19 virus may be based on weak and overly optimistic evidence from studies that are biased and unreliable, suggests research published by The BMJ today.


The COVID-19 pandemic is considered to pose a significant and urgent threat to global health with numbers of cases and deaths rising steadily in many countries.

Since the outbreak began in December of last year, it has put a strain on healthcare systems and highlighted the need for efficient early detection, diagnosis of patients suspected of infection, and prognosis for confirmed cases.

Viral nucleic acid testing and chest computed tomography (CT) are the current standard methods for diagnosing COVID-19, but both are time-consuming.

Gary Collins, professor of medical statistics at NDORMS, was part of a team of European researchers from Maastricht University, KU Leuven, University Medical Center Utrecht, Medical University of Vienna, Keele University, and Leiden University that, in collaboration with the Cochrane Prognosis Methods group (which includes Professor Collins), examined multiple studies on the virus.

They set out to identify, review, and appraise prediction models for the diagnosis and prognosis of COVID-19 infection from published and pre-print reports. They found that most were poorly reported, at high risk of bias, and of questionable value should they be put into practice.

"The models we evaluated aimed to predict either the presence of existing COVID-19 infection, future complications including mortality in individuals already diagnosed, or models to identify individuals at high risk for COVID-19 in the general population," said Gary. "We recognise that in these challenging times there is very little data is available, but almost all of those we studied in our review indicate proposed models are poorly reported and at high risk of bias. Their reported performance is likely optimistic and using them to support medical decision making is not advised."

The study team focused on 27 studies that described 31 prediction models. The vast majority (25) of studies used data on COVID-19 cases from China, one study used data on Italian cases, and one study used international data (among others, United States, United Kingdom and China). Collectively, data were gathered between 8 December 2019 and 15 March 2020.

The researchers' analysis identified three models to predict hospital admission from pneumonia and other events (as a proxy for COVID-19 pneumonia) in the general population, as well as 18 diagnostic models to detect COVID-19 infection in symptomatic individuals, 13 of which were machine learning models built on CT results.

In addition, they identified 10 prognostic models for predicting mortality risk, a person's progression to a severe state, or length of hospital stay.

The researchers found that all the studies they analysed were rated as having a high risk of bias, mostly because:
• they had a non-representative selection of control patients
• they excluded patients who were still ill at the end of the study
• they had poor statistical analysis.

The quality of reporting in the studies varied substantially. A description of the study population and the intended use of the models was absent in almost all reports, and calibration of predictions was rarely assessed.
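The calibration assessment mentioned above can be illustrated with a short sketch. This is a minimal example on simulated, hypothetical data (not from any of the reviewed studies), using scikit-learn: calibration compares a model's predicted risks with observed event rates, summarised by a calibration slope (ideal value 1) and calibration-in-the-large intercept (ideal value 0).

```python
# Hedged sketch: checking calibration of predicted probabilities
# against observed binary outcomes. All data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical predicted probabilities from some model, with outcomes
# simulated so the true risk matches the predictions (well calibrated).
p_pred = rng.uniform(0.05, 0.95, size=2000)
y = rng.binomial(1, p_pred)

# Regress observed outcomes on the log-odds of the predictions.
# A very large C effectively disables regularisation.
logit = np.log(p_pred / (1 - p_pred)).reshape(-1, 1)
fit = LogisticRegression(C=1e9).fit(logit, y)

slope = fit.coef_[0][0]        # calibration slope (ideal: 1)
intercept = fit.intercept_[0]  # calibration-in-the-large (ideal: 0)
print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```

Because the simulated outcomes are drawn from the predicted risks, the fitted slope and intercept land close to their ideal values; a poorly calibrated model would show a slope well below 1 (over-optimistic predictions) or a non-zero intercept.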

The speed at which prediction models for COVID-19 were being produced raised concern that the "models may be flawed and perform poorly when applied in practice, such that their predictions may be unreliable".

They recommend immediate sharing of individual participant data from COVID-19 studies to support collaborative efforts in building "more rigorously developed prediction models" and evaluating existing models. However, whilst the use of the individual models included in the review is not advised, the review identified a number of predictors, frequently included across models, that should be considered when researchers develop new models.

"We also stress the need to follow methodological guidance when developing and validating prediction models, ideally involving prediction model experts, as unreliable predictions may cause more harm than benefit when used to guide clinical decisions," concluded Gary.

The researchers will continue updating the evidence in a so-called "living review", to ensure healthcare professionals have the most recent information about the availability and quality of diagnostic and prognostic models.