
OBJECTIVES: When developing a clinical prediction model, penalisation techniques are recommended to address overfitting, as they shrink predictor effect estimates towards the null and reduce mean-square prediction error in new individuals. However, shrinkage and penalty terms ('tuning parameters') are estimated with uncertainty from the development dataset. We examined the magnitude of this uncertainty and its subsequent impact on prediction model performance.

STUDY DESIGN AND SETTING: Applied examples and a simulation study of the following methods: uniform shrinkage (estimated via a closed-form solution or bootstrapping), ridge regression, the lasso, and the elastic net.

RESULTS: In a particular model development dataset, penalisation methods can be unreliable because tuning parameters are estimated with large uncertainty. This is of most concern when development datasets have a small effective sample size and the model's Cox-Snell R2 is low. The problem can lead to considerable miscalibration of model predictions in new individuals.

CONCLUSIONS: Penalisation methods are not a 'carte blanche'; they do not guarantee that a reliable prediction model is developed. They are most unreliable when needed most (i.e. when overfitting may be large). We recommend they are best applied with large effective sample sizes, as identified from recent sample size calculations that aim to minimise the potential for model overfitting and to precisely estimate key parameters.
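The instability of tuning parameters described above can be illustrated with a small sketch (not taken from the paper): a ridge penalty is chosen by k-fold cross-validation on repeated bootstrap resamples of a simulated low-signal dataset, and the chosen penalties are compared. All function names, grid choices, and the simulated data are hypothetical and for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam):
    # closed-form ridge estimate; for brevity the intercept is omitted
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_lambda(X, y, lambdas, k=5, rng=None):
    # pick the penalty minimising k-fold cross-validated squared error
    n = len(y)
    idx = np.arange(n)
    if rng is not None:
        rng.shuffle(idx)
    folds = np.array_split(idx, k)
    errs = []
    for lam in lambdas:
        sse = 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)
            b = ridge_fit(X[train], y[train], lam)
            sse += np.sum((y[f] - X[f] @ b) ** 2)
        errs.append(sse / n)
    return lambdas[int(np.argmin(errs))]

# simulate a small development dataset with weak signal (low R2),
# the setting the abstract flags as most problematic
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.full(p, 0.1)
y = X @ beta + rng.standard_normal(n)

# re-estimate the tuning parameter on 20 bootstrap resamples
lambdas = np.logspace(-2, 3, 30)
chosen = [cv_lambda(X[b], y[b], lambdas, rng=rng)
          for b in (rng.integers(0, n, n) for _ in range(20))]
print(min(chosen), max(chosen))  # spread shows tuning-parameter uncertainty
```

The spread between the smallest and largest selected penalty across resamples gives a rough picture of how uncertain the tuning parameter is in a dataset of this size; in larger or stronger-signal datasets the spread typically narrows.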

Original publication

DOI

10.1016/j.jclinepi.2020.12.005

Type

Journal article

Journal

J Clin Epidemiol

Publication Date

08/12/2020

Keywords

overfitting, penalisation, risk prediction models, sample size, shrinkage