Can testing clinical significance reduce false positive rates in randomized controlled trials? A snap review

Abstract

Objective: Using the minimum clinically important difference in the hypothesis formulation of a superiority trial is similar in principle to the design of non-inferiority and equivalence trials. However, most clinical trials are analysed by testing for a zero difference. Since the minimum clinically important difference is pre-defined for the power calculation, it is important to incorporate it in both the hypothesis testing and the interpretation of findings from clinical trials.
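
As a rough sketch of the distinction the paper draws (not taken from the paper itself), the snippet below contrasts a conventional test of zero difference with a test anchored at the minimum clinically important difference, for a hypothetical two-arm binary-outcome trial; all numbers are illustrative assumptions.

```python
# Sketch: statistical vs clinical significance of a risk difference.
# All trial numbers and the MCID below are hypothetical.
from math import sqrt
from scipy.stats import norm

events_t, n_t = 60, 200   # treatment arm: events / sample size
events_c, n_c = 90, 200   # control arm: events / sample size
mcid = 0.10               # assumed minimum clinically important risk difference

p_t, p_c = events_t / n_t, events_c / n_c
rd = p_c - p_t            # observed risk reduction
se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)

# Conventional superiority test: H0: RD = 0 (two-sided)
p_zero = 2 * norm.sf(abs(rd / se))

# Clinical-significance test: H0: RD <= MCID vs H1: RD > MCID (one-sided)
p_mcid = norm.sf((rd - mcid) / se)

print(f"RD = {rd:.3f}; p vs zero = {p_zero:.4f}; p vs MCID = {p_mcid:.4f}")
```

A trial can reject the zero-difference null (statistical significance) while failing to reject the MCID-anchored null (clinical significance), which is the gap the review quantifies.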

Results: We reviewed a set of 50 publications (25 with a binary outcome and 25 with a survival-time outcome). Only 20% of the 50 published trials that were statistically significant were also clinically significant based on the minimum clinically important risk difference (or hazard ratio) used for their power calculations. This snap review suggests that most published trials with statistically significant results were unlikely to be clinically significant, which may partly explain the high false positive rate associated with findings from superiority trials. Furthermore, none of the reviewed publications explicitly used the minimum clinically important difference in the interpretation of their findings. However, a systematic review is needed to critically appraise the impact of the current practice on the false positive rate in published trials with significant findings.

Publication
BMC Research Notes 2017; 10:775
