Do peer reviewers comment on reporting items as instructed by the journal? A secondary analysis of two randomised trials.
Ramirez HW., Chiaborelli M., Schönenberger CM., Mellor K., Griessbach AN., Dhiman P., Gandhi P., Lohner S., Agarwal A., Odutayo A., Schlussel MM., Ravaud P., Moher D., Briel M., Boutron I., Hopewell S., Schroter S., Speich B.
OBJECTIVES: Two studies randomising manuscripts submitted to biomedical journals previously showed that reminding peer reviewers about key reporting items did not improve reporting quality in the published articles. In this secondary analysis of peer reviewer reports, we aimed to assess at which stage the intervention failed.

STUDY DESIGN AND SETTING: We conducted an exploratory analysis of peer reviewer reports from two published randomised controlled trials (RCTs) conducted at biomedical journals. The first RCT (CONSORT-PR) assessed adherence to the Consolidated Standards of Reporting Trials (CONSORT) guideline in manuscripts presenting primary RCT results. The second RCT (SPIRIT-PR) included manuscripts presenting RCT protocols and assessed adherence to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. In both RCTs, peer reviewers in the control group received no reminder, while all reviewers in the intervention group received a reminder of the ten most important reporting items. For this secondary analysis, two independent, blinded authors extracted from the peer reviewer reports which of these ten reporting items the reviewers asked the authors to clarify. The main outcome was the between-group difference in the mean proportion of the ten reporting items for which at least one peer reviewer requested clarification. Furthermore, we assessed how this difference changed when (i) only published manuscripts were considered and (ii) only requested changes that were actually implemented by the authors were counted.

RESULTS: We assessed peer reviewer reports for 533 manuscripts (n=265 intervention group; n=268 control group). In the intervention group, clarification was requested for a mean of 21.1% (95% CI, 18.6-23.6%) of the ten reporting items, compared with 13.1% in the control group, a mean difference of 8.0% (95% CI, 4.9-11.1%).
However, this difference diminished to 4.2% when only accepted and published manuscripts were assessed, and fell further to 2.6% when only changes actually implemented by the authors were counted.

CONCLUSION: Reminding peer reviewers to check reporting items increased their focus on reporting guidelines, leading to more reporting-related requests in their reviews. However, the effect was strongly diluted during the peer review and publication process by manuscript rejections and by requests that authors did not implement.