The widespread use of artificial intelligence (AI) in medical decision-making tools has led to an update of the TRIPOD guidelines for reporting clinical prediction models. The new TRIPOD+AI guidelines are launched in the BMJ today.

The TRIPOD guidelines (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) were developed in 2015 to improve the reporting of tools that doctors use to aid diagnosis and prognosis. Their widespread uptake by medical practitioners, who use such tools to estimate the probability that a specific condition is present or may occur in the future, has helped improve the transparency and accuracy of decision-making and significantly improved patient care.

But research methods have moved on since 2015, and we are witnessing an acceleration of studies developing prediction models using AI, specifically machine learning methods. Transparency is one of the six core principles underpinning the WHO guidance on ethics and governance of artificial intelligence for health. TRIPOD+AI has therefore been developed to provide a framework and set of reporting standards to improve the reporting of studies developing and evaluating AI prediction models, regardless of the modelling approach.

The TRIPOD+AI guidelines were developed by a consortium of international investigators, led by researchers from the University of Oxford alongside researchers from other leading institutions across the world, healthcare professionals, industry, regulators, and journal editors. The development of the new guidance was informed by research highlighting poor and incomplete reporting of AI studies, a Delphi survey, and an online consensus meeting.

Gary Collins, Professor of Medical Statistics at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, and lead researcher in TRIPOD, says: ‘There is enormous potential for artificial intelligence to improve healthcare from earlier diagnosis of patients with lung cancer to identifying people at increased risk of heart attacks. We’re only just starting to see how this technology can be used to improve patient outcomes. Deciding whether to adopt these tools is predicated on transparent reporting. Transparency enables errors to be identified, facilitates appraisal of methods and ensures effective oversight and regulation. Transparency can also create more trust and influence patient and public acceptability of the use of prediction models in healthcare.’

The TRIPOD+AI statement consists of a 27-item checklist that supersedes TRIPOD 2015. The checklist details reporting recommendations for each item and is designed to help researchers, peer reviewers, editors, policymakers and patients understand and evaluate the quality of the study methods and findings of AI-driven research.

A key change in TRIPOD+AI has been an increased emphasis on trustworthiness and fairness. Prof. Carl Moons, UMC Utrecht, said: ‘While these are not new concepts in prediction modelling, AI has drawn more attention to these as reporting issues. A reason for this is that many AI algorithms are developed on very specific data sets that are sometimes not even from studies or could simply be drawn from the internet. We also don’t know which groups or subgroups were included. So to ensure that studies do not discriminate against any particular group or create inequalities in healthcare provision, and to ensure decision-makers can trust the source of the data, these factors become more important.’

Dr Xiaoxuan Liu and Prof Alastair Denniston, Directors of the NIHR Incubator for Regulatory Science in AI & Digital Healthcare and co-authors of TRIPOD+AI, said: ‘Many of the most important applications of AI in medicine are based on prediction models. We were delighted to support the development of TRIPOD+AI, which is designed to improve the quality of evidence in this important area of AI research.’

TRIPOD 2015 helped change the landscape of clinical research reporting, bringing minimum reporting standards to prediction models. The original guidelines have been cited over 7,500 times, featured in multiple journals’ instructions to authors, and been included in WHO and NICE briefing documents.

‘I hope that TRIPOD+AI will lead to a marked improvement in reporting, reduce waste from incompletely reported research, and enable stakeholders to arrive at an informed judgement, based on full details, of the potential of AI technology to improve patient care and outcomes, cutting through the hype in AI-driven healthcare innovations,’ concluded Gary.