ECPR


Suppressing the Spread of Pro-Russian Narratives on the War in Ukraine Through Counter-Narratives

Cyber Politics
Political Psychology
Communication
Ethics
Field Experiments
Public Opinion
Tetsuro Kobayashi
Waseda University
Irad Ben Gal
Tel Aviv University
Sharva Gogawale
Tel Aviv University
Carmel Kronfeld
Tel Aviv University



Abstract

Pro-Russian narratives concerning the war in Ukraine have been widely disseminated across global social media platforms and have influenced public opinion in democratic societies, fueling populist opposition to assistance for Ukraine as well as conspiracy theories. Because democratic states place a strong normative emphasis on freedom of expression, they cannot uniformly censor such narratives; at the same time, content moderation on social media platforms has been weakening, and government-led regulation of platforms faces clear structural and legal limitations. Against this backdrop, there is a growing need for effective means of countering the diffusion of illiberal narratives within free and open discursive spaces.

This study reports the results of a field experiment designed to suppress the spread of pro-Russian narratives related to the war in Ukraine, employing a methodology that uses generative artificial intelligence to generate and refine counter-narratives. We first identify rhetorical techniques and writing styles that independently enhance three key outcomes of counter-narratives: persuasiveness, emotional engagement, and shareability. Building on these findings, we develop a pipeline that boosts the effectiveness of counter-narratives through a multi-stage agentic framework, in which multiple generative agents with distinct roles iteratively update prompts to maximize these three outcomes. In the field experiment reported here, pro-Russian narratives posted in English on X/Twitter are detected as close to real time as possible, and counter-narratives are generated by large language models using carefully refined prompts.
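The multi-stage agentic refinement described above can be sketched as follows. This is a minimal illustrative outline, not the study's actual pipeline: the agent functions are stand-ins for LLM calls, and all names (`generator_agent`, `critic_agent`, `prompt_updater_agent`) are hypothetical.

```python
import random

OUTCOMES = ("persuasiveness", "emotional_engagement", "shareability")

def generator_agent(prompt: str, narrative: str) -> str:
    """Stand-in for an LLM that drafts a counter-narrative from a prompt."""
    return f"[draft from prompt: {prompt}] responding to: {narrative}"

def critic_agent(draft: str) -> dict:
    """Stand-in for LLM critics scoring the three key outcomes (0-1)."""
    rng = random.Random(hash(draft) % (2 ** 32))
    return {outcome: rng.random() for outcome in OUTCOMES}

def prompt_updater_agent(prompt: str, scores: dict) -> str:
    """Stand-in for an agent that revises the prompt toward the weakest outcome."""
    weakest = min(scores, key=scores.get)
    return prompt + f" | emphasize {weakest.replace('_', ' ')}"

def refine_counter_narrative(narrative: str, seed_prompt: str, n_rounds: int = 5):
    """Iteratively update the prompt, keeping the best-scoring draft so far."""
    best_prompt, best_total, best_draft = seed_prompt, -1.0, ""
    prompt = seed_prompt
    for _ in range(n_rounds):
        draft = generator_agent(prompt, narrative)
        scores = critic_agent(draft)
        total = sum(scores.values())
        if total > best_total:
            best_prompt, best_total, best_draft = prompt, total, draft
        prompt = prompt_updater_agent(prompt, scores)
    return best_draft, best_prompt

draft, prompt = refine_counter_narrative(
    "example pro-Russian post", "Write a factual rebuttal."
)
```

The loop keeps the highest-scoring draft across rounds rather than the last one, so a prompt update that degrades quality does not discard earlier gains.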
Posts targeted for diffusion suppression are randomly assigned to either a control group or a treatment group; no intervention is applied to the control group, whereas posts in the treatment group receive the generated counter-narratives in the form of mentions. The numbers of likes, reposts, and views are tracked for both groups, and the effectiveness of counter-narratives in suppressing diffusion is assessed empirically by comparing the two conditions. The field experiment is currently ongoing and is scheduled to conclude on January 15.

As digital authoritarian actors make increasingly sophisticated use of generative artificial intelligence, those seeking to defend democracy must likewise employ generative AI to counter these developments, and this study offers one such methodological approach. At the same time, careful and sustained attention must be devoted to the ethical dimensions of this line of research, which are themselves treated as an important object of analysis and discussion.
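The experimental design lends itself to a short sketch: detected posts are randomized into control and treatment, only treatment posts are flagged for a counter-narrative mention, and engagement metrics are logged for both arms. This is an illustrative outline under assumed names (`assign_condition`, `run_assignment`), not the study's code; metric values are left unfilled because they would come from the platform.

```python
import random

def assign_condition(rng: random.Random) -> str:
    """Randomly assign a detected post to an experimental arm (50/50 split)."""
    return "treatment" if rng.random() < 0.5 else "control"

def run_assignment(post_ids, seed: int = 42) -> dict:
    """Build an assignment log; only treatment posts are marked for intervention."""
    rng = random.Random(seed)  # fixed seed keeps assignment reproducible
    log = {}
    for pid in post_ids:
        condition = assign_condition(rng)
        log[pid] = {
            "condition": condition,
            "intervened": condition == "treatment",
            # Engagement metrics to be tracked for BOTH arms after assignment.
            "metrics": {"likes": None, "reposts": None, "views": None},
        }
    return log

log = run_assignment([f"post_{i}" for i in range(100)])
n_treated = sum(1 for record in log.values() if record["intervened"])
```

Because metrics are collected for control posts as well, the control arm provides the no-intervention baseline against which diffusion suppression is estimated.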