Overview
In their impactful pre-registered meta-analysis, Chan and Albarracín1 aimed to determine the degree to which the public updates science-relevant misinformation following a correction. Based on an impressive 75 studies and 245 effect sizes, the authors conclude that “attempts to debunk science-relevant misinformation were, on average, not successful” (d = 0.11; P = 0.142; 95% confidence interval (CI) = −0.04 to 0.26), with the effect of corrections “smaller than those identified in all other areas” (for example, politics and health2). Here we show that the reported null effect was due to the inappropriate pooling of two distinct effect types into a single estimate. This clarification is necessary because meta-analyses are often perceived as the gold standard of evidence, and numerous papers have cited Chan and Albarracín
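The pooling problem can be illustrated with a toy sketch (all numbers below are invented for illustration and are not data from the meta-analysis): averaging effect sizes from two distinct comparison types that point in opposite directions produces a near-zero pooled estimate, even though each type on its own shows a clear, reliable effect.

```python
from statistics import mean

# Hypothetical effect sizes (Cohen's d), invented for illustration.
# Type 1: corrected vs. uncorrected misinformation (positive d = debunking works).
debunking_effects = [0.6, 0.5, 0.7, 0.55]
# Type 2: corrected misinformation vs. a no-misinformation baseline
# (negative d = belief is not fully restored to baseline).
residual_effects = [-0.4, -0.5, -0.45, -0.35]

# Each effect type, analysed separately, is clearly non-zero.
print(f"mean within type 1: {mean(debunking_effects):.2f}")
print(f"mean within type 2: {mean(residual_effects):.2f}")

# Pooling both types into one estimate lets the signs cancel,
# yielding a misleading near-null average.
pooled = debunking_effects + residual_effects
print(f"pooled mean:        {mean(pooled):.2f}")
```

A real meta-analysis would weight studies by precision rather than take a simple mean, but the cancellation mechanism is the same: a pooled estimate near zero can coexist with two subgroups of effects that are each far from zero.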



