Systematic review finds "Spin" practices and poor reporting standards in studies on machine learning-based prediction models.
Andaur Navarro CL, Damen JA, Takada T, Nijman SWJ, Dhiman P, Ma J, Collins GS, Bajpai R, Riley RD, Moons KG, Hooft L.
OBJECTIVE: We evaluated the presence and frequency of spin practices and poor reporting standards in studies that developed and/or validated clinical prediction models using supervised machine learning techniques.

STUDY DESIGN AND SETTING: We systematically searched PubMed from January 2018 to December 2019 to identify diagnostic and prognostic prediction model studies using supervised machine learning. No restrictions were placed on data source, outcome, or clinical specialty.

RESULTS: We included 152 studies: 38% reported diagnostic models and 62% prognostic models. When reported, discrimination was described without precision estimates in 53/71 abstracts (74.6% [95% CI 63.4 - 83.3]) and 53/81 main texts (65.4% [95% CI 54.6 - 74.9]). Of the 21 abstracts that recommended the model for use in daily practice, 20 (95.2% [95% CI 77.3 - 99.8]) lacked any external validation of the developed model. Likewise, 74/133 (55.6% [95% CI 47.2 - 63.8]) studies made recommendations for clinical use in their main text without any external validation. Reporting guidelines were cited in only 13/152 (8.6% [95% CI 5.1 - 14.1]) studies.

CONCLUSION: Spin practices and poor reporting standards are also present in studies on prediction models using machine learning techniques. A tailored framework for the identification of spin would enhance the sound reporting of prediction model studies.
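Note on the interval estimates: the abstract does not state which confidence interval method was used for the reported proportions. The minimal sketch below, assuming a Wilson score interval (a standard choice for binomial proportions), reproduces four of the intervals quoted above exactly; it is shown only to illustrate how such intervals are computed, not as the authors' confirmed method.

    import math

    def wilson_ci(x: int, n: int, z: float = 1.959964) -> tuple[float, float]:
        """95% Wilson score confidence interval for a proportion x/n."""
        p = x / n
        denom = 1 + z**2 / n
        center = p + z**2 / (2 * n)
        margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return ((center - margin) / denom, (center + margin) / denom)

    # Proportions reported in the RESULTS section (numerator, denominator, label)
    for x, n, label in [
        (53, 71,  "discrimination without precision, abstracts"),
        (53, 81,  "discrimination without precision, main texts"),
        (74, 133, "clinical-use recommendation without external validation"),
        (13, 152, "reporting guideline cited"),
    ]:
        lo, hi = wilson_ci(x, n)
        print(f"{label}: {x}/{n} = {100 * x / n:.1f}% "
              f"[95% CI {100 * lo:.1f} - {100 * hi:.1f}]")

Running this prints, for example, "53/71 = 74.6% [95% CI 63.4 - 83.3]", matching the abstract. The 20/21 result is omitted from the sketch because its upper bound (99.8) differs slightly from the Wilson value, suggesting a different small-sample method may have been used for that estimate.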