ECPR

Resisting algorithmic content moderation

Naomi Appelman
University of Amsterdam

Abstract

This paper contributes to the theoretical and policy debate on algorithmic content moderation by foregrounding people’s strategies of resistance to these systems, using Tully’s public philosophy and Gangadharan and Niklas’ frame of decentring technology. Focussing on how marginalised groups resist content moderation harms will garner crucial insight into how these algorithmic systems of governance operate and where change is most urgently needed. In doing so, we propose a shift away from the policy debate’s singular framing of such conflictual practices as online safety risks. The way in which platforms moderate, often with the stated aim of creating safe online spaces, can instead function to harm through, for example, over- and under-removal of content or shadowbanning. Moreover, these harms are not equally distributed and disproportionately harm marginalised communities (Haimson et al., 2021; Marshall, 2021; Smith et al., 2021). Crucially, these disparate content moderation practices are technologically mediated through a range of algorithmic systems (Gorwa et al., 2020). Both the type of system and its implementation affect the entire sociotechnical practice of content moderation and introduce their own logics and harms (Balayn & Gürses, 2021; Binns et al., 2017; Dias Oliva et al., 2021; Noble, 2018). The resulting automatic bans, opaque procedures, and impenetrable platform decision-making spur affected people to develop strategies to deal with ‘the algorithm’. Extensive empirical research discusses strategies such as subversion, adaptation, experimentation or refusal (Duffy & Meisner, 2022; Ganesh & Moss, 2022; Vitak et al., 2017). In response, platforms and policy alike conceptualise these strategies of resistance as themselves the threat to online safety. Tully’s agonism centres what he calls ‘practices of freedom’ in the analysis of oppressive systems of governance (Tully, 1999, 2008; Tully & Livingston, 2022). Focussing on such practices of freedom means grounding our understanding of how content moderation functions to harm in the strategies people have developed to contest it. Moreover, this allows us to decentre the technologies or the policies and, rather, look at the systemic conditions that produce these injustices (Gangadharan & Niklas, 2019). Through an analysis of concrete practices of freedom, the paper offers the following insights into the algorithmic governance of online speech. First, certain practices of freedom potentially threaten a platform’s brand safety. The (lack of) responsiveness in different cases shows how these companies are embedded in broader social systems of oppression and act according to their political economic interests. Second, the algorithmic governance of online speech means the sites of contestation are not confined to the norms or their enforcement: crucial decisions are made in the design, development, and implementation of these algorithmic systems (e.g., risk thresholds, categorisations, or parameters). Finally, conflictual behaviour, ranging from protest to forms of subversion, can form productive and emancipatory practices, and should not be solely framed as a threat to online safety. Grounding the debate in practices of freedom will make these nuances visible and offer an agenda for change.