Gary Collins
PhD
Honorary Departmental Professor
RESEARCH
Gary's research interests focus primarily on methodological aspects of the development and evaluation of multivariable prediction models, and he has published extensively in this area.
Gary led an international initiative to develop guidance for prediction model studies using artificial intelligence and machine learning (TRIPOD+AI). He has also been involved in other reporting guidance for artificial intelligence/machine learning studies, including CONSORT-AI/SPIRIT-AI (for reporting AI intervention studies), STARD-AI (for reporting AI-based diagnostic test accuracy studies), and DECIDE-AI (bridging the development-implementation gap), as well as in developing risk of bias tools for machine learning diagnostic test accuracy studies (QUADAS-AI) and prediction model studies (PROBAST+AI). He is also working with colleagues from the University of Southern California on guidance for the responsible use of large language models such as ChatGPT in research (the CANGARU guidelines), and with colleagues from McMaster University on reporting guidance for studies evaluating chatbots that provide medical advice (the CHART guideline).
Gary was involved in the development of numerous other reporting guidelines, including the GATHER statement for reporting global health estimates (published in The Lancet and PLoS Medicine) and the AGReMA statement for reporting mediation analyses (published in JAMA). More recently, he was involved in updating the SPIRIT and CONSORT guidelines for clinical trials. Gary is also a steering group member of the international STRATOS Initiative, which aims to provide accessible and accurate guidance on the design and analysis of observational studies, and currently sits on the external advisory board for the Center for Open Science's Transparency and Openness Promotion (TOP) Guidelines.
Key publications
-
TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods.
Journal article
Collins GS et al. (2024), BMJ, 385
-
Evaluation of clinical prediction models (part 1): from development to external validation.
Journal article
Collins GS et al. (2024), BMJ, 384
-
Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration.
Journal article
Moons KGM et al. (2015), Ann Intern Med, 162, W1-73
-
A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models.
Journal article
Christodoulou E et al. (2019), J Clin Epidemiol, 110, 12-22
-
Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies.
Journal article
Nagendran M et al. (2020), BMJ, 368
-
Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist.
Journal article
Moons KGM et al. (2014), PLoS Med, 11
-
A Guideline for Reporting Mediation Analyses of Randomized Trials and Observational Studies: The AGReMA Statement.
Journal article
Lee H et al. (2021), JAMA, 326, 1045-1056
-
Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI.
Journal article
Vasey B et al. (2022), Nat Med, 28, 924-933
-
Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER statement.
Journal article
Stevens GA et al. (2016), Lancet, 388, e19-e23
-
CONSORT 2025 statement: updated guideline for reporting randomised trials.
Journal article
Hopewell S et al. (2025), BMJ, 389