Deepfake Technology: Misogyny and the Space for Progressive Responses Between Criminalisation and Creative Resistance
Gender
Social Justice
Social Movements
Feminism
Social Media
Political Activism
Power
Technology
Abstract
This paper conceptualises the political uses of deepfake technology as a misogynistic tool of repression and maps political responses to it, from institutionalised acts to radical progressive possibilities centred on alternative uses that combat sexism.
‘Deepfake’ is a portmanteau of ‘deep learning’ and ‘fake’, coined in 2017 by a Reddit user who created a forum to share altered pornographic videos. Since then, multiple face-swapping apps and programmes have been developed to create altered images, videos, and audio, with increasing accessibility and ease of use for the general public.
Deepfake technology has various uses; however, research suggests that harmful sexualised deepfakes are the most prolific, making up 98% of deepfake videos found online and disproportionately targeting women (Home Security Heroes, 2023). Within this context, harmful, sexualised AI-generated images are used against female politicians and activists in efforts to shame and silence them (Ritchie, 2024). Furthermore, as deepfake technology has become easier to use, deepfakes have become part of toxic techno-cultures that use women’s images to harass and humiliate them, making the technology a threat to all women and girls (Massanari, 2017; van der Nagel, 2020).
In mapping responses to this threat, we argue that current mainstream political solutions, supported by both left-wing and right-wing parties across the globe, centre on implementing new legislation to regulate the creation and sharing of such images. Prominent examples include the UK’s Online Safety Act 2023 and the US’s TAKE IT DOWN Act 2025. Existing academic and journalistic commentary has largely focused on the threat that deepfakes may pose as disinformation and their potential impact on democracy (Gosse and Burkell, 2020; Maddocks, 2020). In mapping the uses of deepfakes, however, we question this focus. Political deepfakes, at least, are not always used to spread disinformation; they are also used to connect with followers and to create comedy, memes, or satire of the opposition. For instance, the creators of South Park used their own AI company, Deep Voodoo, to create a ‘naked Trump parody PSA’. Other progressive uses of the technology include the documentary ‘Welcome to Chechnya: The Gay Purge’, which used a new digital ‘face-double’ technique to ensure that ‘the identities of those most at risk are protected’ (BBC).
In mapping these developments, we can see both a prevailing effort to render the creation and sharing of deepfakes illegal through regulation and nascent attempts to reclaim the technology and put it to progressive uses. We argue that both of these responses carry significant risks. Regulatory efforts may reinforce oppressive carceral systems whilst falling short of curbing misogyny and patriarchy. Progressive efforts to utilise deepfakes, on the other hand, may normalise the technology while failing to acknowledge that its overwhelmingly misogynistic usage may be an intrinsic feature of the structures from which deepfakes emerge.