Abstract:
Social networks scaffold the diffusion of information on social media. Much attention has been given to the spread of true vs. false content on online social platforms, including the structural differences between their diffusion patterns. However, much less is known about how platform interventions on false content alter the engagement with and diffusion of such content. In this work, we estimate the causal effects of Community Notes, a novel fact-checking feature adopted by X (formerly Twitter) to solicit and vet crowd-sourced fact-checking notes for false content. We gather detailed time series data for 40,078 posts for which notes have been proposed and use synthetic control methods to estimate a range of counterfactual outcomes. We find that attaching fact-checking notes significantly reduces the engagement with and diffusion of false content. We estimate that, on average, the notes resulted in reductions of 46.1% in reposts, 44.1% in likes, 21.9% in replies, and 13.5% in views after being attached. Over the posts’ entire lifespans, these reductions amount to 11.6% fewer reposts, 13.3% fewer likes, 6.9% fewer replies, and 5.5% fewer views on average. In reducing reposts, we observe that diffusion cascades for fact-checked content are less deep and less “viral,” but not less broad, than synthetic control estimates for non-fact-checked content with similar reach. This structural difference contrasts notably with differences between false vs. true content diffusion itself, where false information diffuses farther, but with structural patterns that are otherwise indistinguishable from those of true information, conditional on reach.
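The synthetic control approach mentioned above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it assumes each fact-checked post's pre-note engagement trajectory is matched by a convex combination of non-fact-checked "donor" posts, and the weighted donor trajectory after the note serves as the counterfactual. All variable names and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(y_pre, X_pre):
    """Find convex donor weights w (w >= 0, sum(w) = 1) minimizing the
    squared pre-treatment error ||y_pre - X_pre @ w||^2."""
    n_donors = X_pre.shape[1]
    w0 = np.full(n_donors, 1.0 / n_donors)  # start from uniform weights
    res = minimize(
        lambda w: np.sum((y_pre - X_pre @ w) ** 2),
        w0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n_donors,
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return res.x

# Toy data: hourly engagement counts (e.g., reposts) for 50 donor posts,
# observed for 12 hours before the note is attached.
rng = np.random.default_rng(0)
donors_pre = rng.poisson(20, size=(12, 50)).astype(float)
true_w = np.zeros(50)
true_w[:3] = [0.5, 0.3, 0.2]          # treated post built from 3 donors (by construction)
treated_pre = donors_pre @ true_w      # pre-treatment trajectory of the fact-checked post

w = synthetic_control_weights(treated_pre, donors_pre)

# Post-treatment: the weighted donor trajectory estimates the engagement
# the post would have received had no note been attached.
donors_post = rng.poisson(20, size=(6, 50)).astype(float)
counterfactual_post = donors_post @ w
```

Comparing `counterfactual_post` to the observed post-note engagement yields the treatment-effect estimates (e.g., the reductions in reposts and likes reported above), aggregated over many posts.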
Martin Saveski is an Assistant Professor at the University of Washington Information School. He received his Ph.D. from the Massachusetts Institute of Technology and was a Postdoctoral Scholar at Stanford University. His research develops tools for analyzing large-scale social data, aiming to provide a better understanding of social structure and behavior online while also informing the design of digital social systems. His recent work has focused on developing new feed ranking algorithms aimed at reducing political polarization and on studying the effects of crowdsourced fact-checking. His work has appeared in general-audience journals, including Science and PNAS, and in computer science venues such as ICWSM, WWW, NeurIPS, and ICML. His research has also been featured in popular media outlets, including The New York Times, NPR, The Guardian, and the MIT Technology Review.
