The PROBAST+AI guidelines have been updated to provide clearer, more comprehensive standards for evaluating artificial intelligence (AI) models in healthcare research. Originally introduced to assess the risk of bias and the robustness of prediction models, they have been revised to address the growing importance of AI in healthcare decision-making.

Patients are set to benefit from new guidelines on Artificial Intelligence healthcare solutions

The PROBAST (Prediction model Risk Of Bias ASsessment Tool) guidelines were first published in 2019 to evaluate the quality and sources of bias in healthcare prediction models, i.e. those that estimate the probability of a health outcome for individuals. Since then, the use of artificial intelligence (AI) and machine learning techniques has become more widespread, creating a need for new guidance for studies that incorporate these technologies.

'As AI becomes more widely used in medical decision-making, it's vital that researchers have the tools to critically appraise these models and the potential biases or limitations they carry,' said Gary Collins, Professor of Medical Statistics at NDORMS, University of Oxford. 'The original PROBAST framework was an important step, but we recognised the need for additional guidance to address the unique challenges of AI research.'

Published in the BMJ, the PROBAST+AI guidelines were developed by an international working group consisting of experts in prediction model research, artificial intelligence, and systematic reviews.

The key features of the updated guidelines include:

  • Comprehensive Assessment Criteria: The guidelines expand on the original framework, providing detailed criteria for evaluating AI models. This includes assessments of data quality, model development, and validation processes.
  • Focus on Bias and Fairness: Recognising that bias can lead to unequal healthcare outcomes, PROBAST+AI emphasises the need to identify and mitigate biases in AI systems, which includes a thorough evaluation of the data sets used to train AI models.
  • Inclusion of Stakeholder Perspectives: The guidelines encourage researchers to involve diverse stakeholders, including patients and healthcare providers, in the development and evaluation of AI models. This collaborative approach helps ensure that the models meet real-world needs.
  • Reduced Research Waste: The framework can be used to guide the design and analysis of a predictive AI study. PROBAST+AI aligns with the TRIPOD+AI reporting guideline to improve the accuracy, effectiveness, generalisability, and appropriate use of AI models.
  • Emphasis on Real-World Impact: The guidelines stress the importance of assessing how AI models perform in actual clinical settings, rather than only in controlled environments. This focus on practicality aims to ensure that AI tools align with their intended use and are effective and beneficial in day-to-day healthcare.

The PROBAST+AI guidelines are expected to have a significant impact on how AI research in healthcare is conducted and reported, helping to ensure that AI models meet high standards of validity, fairness, and applicability, and ultimately building greater trust in the use of AI to support clinical decision-making.