BACKGROUND: With the move to competency-based models of surgical training, a number of assessment methods have been developed. Of these, global rating scales have emerged as popular tools, and several are specific to the assessment of arthroscopic skills. Our aim was to determine which one of a group of commonly used global rating scales demonstrated superiority in the assessment of simulated arthroscopic skills.

METHODS: Sixty-three individuals of varying surgical experience performed a number of arthroscopic tasks on a virtual reality simulator (VirtaMed ArthroS). Performance was assessed blindly by two observers using three global rating scales commonly used to assess simulated skills, and was also assessed by validated objective motion analysis.

RESULTS: All of the global rating scales demonstrated construct validity, with significant differences between each skill level and each arthroscopic task (p < 0.002, Mann-Whitney U test). Interrater reliability was excellent for each global rating scale. For each global rating scale, ratings correlated strongly with time taken (Spearman rho, -0.95 to -0.76; p < 0.001) and moderately to strongly with total path length (Spearman rho, -0.94 to -0.64; p < 0.001).

CONCLUSIONS: No single global rating scale demonstrated superiority as an assessment tool.

CLINICAL RELEVANCE: For these commonly used arthroscopic global rating scales, none was particularly superior, and any one score could therefore be used. Agreement on using a single score seems sensible, and it would seem unnecessary to develop further scales with the same domains for these purposes.
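The two statistical analyses named in the abstract, a Mann-Whitney U test for construct validity (do scores separate experience levels?) and Spearman rank correlation against motion-analysis metrics, can be sketched as follows. The data here are invented for illustration and are not the study's measurements; the group sizes, score ranges, and times are assumptions.

```python
# Sketch of the abstract's statistical tests on invented example data.
from scipy.stats import mannwhitneyu, spearmanr

# Hypothetical global rating scale (GRS) scores for two experience groups.
novice_grs = [12, 14, 15, 13, 16, 14, 12, 15]
expert_grs = [24, 26, 23, 27, 25, 24, 26, 23]

# Construct validity: do the two groups' scores differ significantly?
u_stat, u_p = mannwhitneyu(novice_grs, expert_grs, alternative="two-sided")

# Concurrent validity: do higher GRS scores accompany shorter task times?
# Negative Spearman rho, as in the abstract, means better-rated performers
# finished faster.
grs_scores = novice_grs + expert_grs
time_taken = [310, 295, 280, 300, 270, 290, 320, 285,   # seconds, invented
              150, 140, 160, 130, 145, 155, 135, 165]
rho, rho_p = spearmanr(grs_scores, time_taken)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.4f}")
print(f"Spearman rho = {rho:.2f}, p = {rho_p:.4g}")
```

With data like these, where the expert group scores uniformly higher and finishes faster, both tests come out significant and rho is strongly negative, mirroring the direction of the reported correlations.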

Original publication

DOI

10.2106/JBJS.O.00434

Type

Journal article

Journal

J Bone Joint Surg Am

Publication Date

06/01/2016

Volume

98

Pages

75 - 81

Keywords

Arthroscopy, Clinical Competence, Competency-Based Education, Computer Simulation, Hospitals, University, Humans, Knee Joint, Models, Anatomic, Observer Variation, Orthopedics, Reproducibility of Results, Statistics, Nonparametric, Task Performance and Analysis, User-Computer Interface