
OBJECTIVE: To compare the performance of logistic regression (LR) with machine learning (ML) for clinical prediction modeling. STUDY DESIGN AND SETTING: We conducted a Medline literature search (1/2016 to 8/2017) and extracted comparisons between LR and ML models for binary outcomes. RESULTS: We included 71 of 927 studies. The median sample size was 1250 (range 72-3,994,872), with 19 predictors considered (range 5-563) and 8 events per predictor (range 0.3-6,697). The most common ML methods were classification trees (30 studies), random forests (28), artificial neural networks (26), and support vector machines (24). Sixty-four (90%) studies used the area under the receiver operating characteristic curve (AUC) to assess discrimination. Calibration was not addressed in 56 (79%) studies. We identified 282 comparisons between an LR and an ML model (AUC range, 0.52-0.99). For 145 comparisons at low risk of bias, the difference in logit(AUC) between LR and ML was 0.00 (95% confidence interval, -0.18 to 0.18). For 137 comparisons at high risk of bias, logit(AUC) was 0.34 (0.20 to 0.47) higher for ML. CONCLUSIONS: We found no evidence of superior performance of ML over LR for clinical prediction modeling, but improvements in methodology and reporting are needed for studies that compare modeling algorithms.
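As a minimal illustrative sketch (not the review's actual analysis code), the Python snippet below shows the kind of comparison the included studies report: fitting logistic regression and one common ML method (a random forest) to a binary outcome, scoring discrimination with the AUC, and expressing the difference on the logit(AUC) scale used in the meta-analysis. The synthetic dataset, the train/test split, and all model settings are assumptions chosen only for demonstration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def logit(p):
        # Log-odds transform, as applied to AUC values in the review.
        return np.log(p / (1.0 - p))

    # Synthetic stand-in for a clinical cohort with a binary outcome
    # (sample size and predictor count chosen arbitrarily for illustration).
    X, y = make_classification(n_samples=1250, n_features=19, n_informative=8,
                               weights=[0.8, 0.2], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # Fit a logistic regression and a random forest on the same training data.
    lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

    # Assess discrimination of each model on held-out data via the AUC.
    auc_lr = roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1])
    auc_rf = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])

    print(f"AUC (logistic regression): {auc_lr:.3f}")
    print(f"AUC (random forest):       {auc_rf:.3f}")
    print(f"difference in logit(AUC), ML - LR: {logit(auc_rf) - logit(auc_lr):.3f}")

A single held-out AUC comparison like this says nothing about calibration, which the review found was not addressed in most studies; a full evaluation would also examine, for example, calibration intercept and slope.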

Original publication

DOI: 10.1016/j.jclinepi.2019.02.004
Type: Journal article
Journal: J Clin Epidemiol
Publication Date: 11/02/2019
Keywords: AUC, calibration, clinical prediction models, logistic regression, machine learning, reporting