Contribution (2): Do Bots and Trolls Speak the Same Language? Actor-Based Analysis of Computational Propaganda in a Multilingual Setting

Keywords: Europe (Central and Eastern), Conflict, Campaign, Quantitative, Social Media, Communication, Comparative Perspective, Big Data
Aleksandra Urman
Universität Bern
Mykola Makhortykh
Universität Bern

Abstract

The increasing use of computational propaganda has detrimental effects on the public sphere, as it can sway citizens' political views and undermine the core principles of the democratic system (Woolley & Howard, 2016). While many studies investigate the manipulative uses of platforms in different political contexts, existing scholarship has several limitations. The majority of studies look at isolated cases of computational propaganda and rarely put them into a larger context. In practice, however, propaganda campaigns can involve both automated and human actors targeting different language audiences with different political messages (Bradshaw & Howard, 2019). Similarly, relatively little research has been done on the use of computational propaganda in multilingual settings. Yet messages in different languages can target specific population groups, which increases their manipulative potential and further fractures the public sphere.

To address this gap, the current paper adopts a systematic actor-based approach to the analysis of computational propaganda and uses it to examine how different categories of actors manipulate public opinion in a multilingual setting over the course of consecutive political campaigns. As a case study, it looks at computational activity in three languages (Ukrainian, Russian, and English) during two political campaigns in Ukraine: the presidential elections (March-April 2019) and the parliamentary elections (July 2019). The choice of the case study is motivated by three factors. First, as part of the ongoing Russian-Ukrainian conflict, Ukraine is a common target of propaganda and disinformation campaigns (Snegovaya, 2015; Sazonov et al., 2016). Second, there is evidence that domestic Ukrainian actors have also started using computational propaganda to target their political opponents (Zhdanova & Orlova, 2017). Third, the multilingualism of the Ukrainian public sphere offers valuable insights into the use of computational propaganda in a multilingual setting.

To implement the study, the paper uses two sets of Twitter data collected over the course of the two electoral campaigns. Using a combination of automated approaches (e.g. Botometer) and human content-based coding for propaganda actor detection, it scrutinizes the presence of automated actors (i.e. bots) and human actors (i.e. trolls/elves) during the two campaigns and traces whether their activity in relation to specific political candidates and parties evolved over time. We use automated language detection to separate English-language tweets from the rest; however, existing algorithms do not reliably distinguish between Russian and Ukrainian, so we designed a naive classification approach relying on the differences between the two alphabets. Subsequently, we analyze the corpora separately using structural topic modeling to identify differences between the content produced by bots and by trolls/elves, as well as between content in different languages.

We find significant differences in the targets and strategies of computational propaganda disseminated in Russian and Ukrainian. We suggest that these differences stem from the overlap between linguistic and political divisions in Ukraine, where content in a specific language is often associated with certain political perspectives.
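For illustration, the alphabet-based heuristic mentioned above can be sketched in a few lines of Python. This is a minimal, hypothetical reconstruction rather than the authors' actual implementation: it assumes classification by counting letters that exist in only one of the two alphabets (і, ї, є, ґ in Ukrainian; ы, э, ъ, ё in Russian), and the function name classify_language is invented for this example.

```python
# Hypothetical sketch of a naive alphabet-based classifier for
# distinguishing Ukrainian from Russian tweets. Names and letter
# sets reflect our assumptions, not code from the paper.

# Letters used in Ukrainian orthography but absent from Russian.
UKRAINIAN_ONLY = set("іїєґІЇЄҐ")
# Letters used in Russian orthography but absent from Ukrainian.
RUSSIAN_ONLY = set("ыэъёЫЭЪЁ")

def classify_language(text: str) -> str:
    """Return 'uk', 'ru', or 'unknown' for a Cyrillic text.

    Counts characters unique to each alphabet and picks the language
    with more distinctive letters; a tie (including zero hits in both
    sets) yields 'unknown'.
    """
    uk_hits = sum(1 for ch in text if ch in UKRAINIAN_ONLY)
    ru_hits = sum(1 for ch in text if ch in RUSSIAN_ONLY)
    if uk_hits > ru_hits:
        return "uk"
    if ru_hits > uk_hits:
        return "ru"
    return "unknown"

if __name__ == "__main__":
    print(classify_language("Вибори відбулися влітку"))  # -> 'uk'
    print(classify_language("Выборы прошли летом"))      # -> 'ru'
```

Because both languages share most of the Cyrillic inventory, a heuristic of this kind only classifies texts that contain at least one language-exclusive letter; short tweets without such letters would remain unlabeled.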