AI-Paraphrasing Increases Perceptions of Social Consensus & Belief in False Information

Large Language Models (LLMs) have the potential to enhance message features and exploit cognitive heuristics to increase the persuasiveness of strategic information campaigns. In this paper, we focus on one such heuristic cue: repetition, which is known to increase belief through the illusory truth effect. We investigate repetition by extracting verbatim repetitive messaging (CopyPasta) from recent U.S. disinformation campaigns. After using an LLM to rewrite the CopyPasta messages, we show that the AI-paraphrased messages (AIPasta) are more lexically diverse than their CopyPasta counterparts while retaining the semantics of the original messages. In a preregistered experiment comparing the persuasive effects of CopyPasta and AIPasta (N = 1,200, U.S. nationally representative sample), we find that AIPasta (versus control) increases perceptions of social consensus for false claims, particularly among participants less familiar with the false narrative. Among Republican participants, AIPasta also increases belief in false claims relative to the control group. Additionally, AIPasta (versus control) increases participants' intention to share the disinformation messages, as well as the degree to which the broader false claims are recalled. Notably, CopyPasta does not demonstrate the same level of persuasiveness across these dimensions. Broadly, our findings suggest that generative AI tools have the potential to amplify or increase the effectiveness of strategic information campaigns through strategies like non-verbatim repetition. These results have implications for the detection and mitigation of harmful and false information.