ECPR
Unveiling Coordinated Inauthentic Behavior: Insights from the Vera AI Alerts System on Visual Manipulation in Political Content

Campaign
Social Media
Agenda-Setting
Mixed Methods
Political Engagement
Influence
Anwesha Chakraborty
Università degli Studi di Urbino
Fabio Giglietto
Università degli Studi di Urbino
Giada Marino
Università degli Studi di Urbino

Abstract

The rise of coordinated inauthentic behavior on social media, in which networks of accounts share links and posts to push specific political agendas, has garnered significant academic attention in recent years. Social media platforms serve as complex arenas of political communication where authentic expressions of collective action against authoritarian tendencies coexist with strategic manipulation of platform users to influence their political opinions and choices. Information operations, particularly during crises, breaking news events, and elections, often exploit coordinated networks of social media actors to spread deceptive content (Giglietto et al. 2020; Giglietto et al. 2023). Research on coordinated behavior detection today also needs to attend to visual content, particularly short-form videos, which can be rapidly generated and manipulated with generative AI technologies. Visual formats pose unique difficulties for detecting coordination, as methods for comparing and computing image similarity lag behind text-based analysis. This paper employs the Vera AI Alerts system, developed as part of a large European project and inspired by previous work by Giglietto and colleagues (2020; 2023), to detect nefarious coordinated link-sharing accounts. The primary data, gathered via the CrowdTangle API from October 2023 to August 2024, comprise 7,068 coordinated posts, 10,681 coordinated links, and 2,126 newly identified accounts. The alerts surfaced three specific examples of coordinated networks: exploited large groups (sexual content), casino engagement bait (financial content), and Putin fan groups (political content). Focusing on the last example, the paper applies an extended visual framing model inspired by Chakraborty and Mattoni (2023), who studied political posts on Facebook by grassroots collective actors, to analyze the visual framing of posts generated by coordinated accounts.
By adding new frames to this model, the study offers insights into the visual manipulation techniques employed in the dissemination of politically charged, inauthentic content on social media. These findings contribute to a deeper understanding of the intersections between (in)authenticity, coordinated behavior, and visual framing in the evolving landscape of online information warfare.