Digitization and AI in Mass Surveillance and Mind Manipulation

Political Psychology
Representation
Technology
Jobst Landgrebe
University at Buffalo



Abstract

Digitization and AI infrastructures have expanded at a rapid pace over the last 20 years, thanks to the miniaturization of storage and processors and massive productivity gains in hardware production. The vast majority of citizens of the Northern hemisphere are constantly online at least passively, so that their movements can be tracked via their phones. But they also actively engage with the web many hours a day, making it the dominant sphere of recordable behaviour and of content consumption in the form of video, text, and audio. With the decline of the uniting framework of Christianity and nationalism since the end of WW2, driven by secularisation, political indifference, resignation, or radicalisation, strategies to maintain political stability have to be revised. While the post-WW2 model was based on economic, social, and political participation, Western societies are now failing to distribute wealth to the vast majority of their populations, who lack passive income. Many of the latter remain indebted throughout their entire professional lives and struggle under the economic pressure of the stagflation we have been experiencing since 2020. We also see a decline in political and social participation. Therefore, political stability is increasingly achieved via repression.

Taking into account in detail the technical capabilities of so-called Artificial Intelligence (AI) models, this paper shows how AI is now utilised for mass surveillance, for the containment of utterances deemed politically relevant, and for the manipulation of the masses. In the surveillance space, AI algorithms running on the digital infrastructure can identify individuals in public spaces and link them to their digital behaviour. Since this behaviour is by its nature available in digital form, it can be processed by algorithms to classify the intentions or political preferences of individuals, taking into account their social status and context at the community and societal level. Such algorithms are used by institutions tasked with reporting citizens to the authorities, such as the Trusted Flaggers institutionalised via the EU's Digital Services Act (DSA). Citizens' opinion misdemeanours or crimes (depending on how the DSA is implemented and how it is contextualised by national legislation) can then be prosecuted and punished via mechanisms ranging from shadow banning and social media account removal to bank account cancellation or imprisonment.

What about mind manipulation? Mass media have always been a means of propaganda, but with AI, the content displayed to and withheld from an individual can be tailored to maximise the effect of propaganda, which has to appeal to basic emotions in order to be effective. This is exemplified by Large Language Model (LLM) based chat applications such as Gemini, Grok, or ChatGPT, which are employed to manipulate their users by identifying and mirroring content types aligned with the preferences of those who commission or influence the models' configuration and logic.
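The abstract does not specify which models perform the classification of intentions or political preferences from recorded digital behaviour. The following Python sketch is a minimal, purely illustrative assumption of how such a classifier could look (TF-IDF features feeding a linear model); the posts, labels, and categories are invented for the example and do not come from the paper.

# Hypothetical sketch: inferring a political-preference label from a user's
# recorded utterances. All data and labels below are invented illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training corpus: texts a user has posted, with analyst-assigned labels.
posts = [
    "We must cut taxes and deregulate the economy.",
    "Public healthcare should be expanded for everyone.",
    "Close the borders and restore national sovereignty.",
    "Climate justice requires radical systemic change.",
]
labels = ["market-liberal", "social-democratic", "national-conservative", "green-left"]

# TF-IDF features feeding a logistic-regression classifier: the simplest
# plausible pipeline for turning digital behaviour into a preference estimate.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(posts, labels)

# Applied to a new, passively collected utterance, the model returns a
# preference estimate that could be attached to the individual's profile.
print(classifier.predict(["Taxes are theft, shrink the state."]))

A production system would of course use far larger corpora and richer behavioural signals (browsing, location, social graph); the sketch only makes the classification step concrete.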
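Similarly, the mechanism by which displayed and withheld content is tailored to an individual is not detailed in the abstract. The sketch below is a hypothetical illustration of one such mechanism: candidate messages are scored by their topical resonance with the user's history, weighted by how strongly a commissioning party wants each message amplified. The weights, messages, and scoring rule are assumptions made for the example, not a documented system.

# Hypothetical sketch: ranking candidate content by (user resonance x sponsor weight).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_history = ["I worry about rising prices and stagnating wages."]
candidates = [
    "Inflation is caused by foreign crises beyond anyone's control.",
    "Inflation is the result of domestic monetary policy choices.",
    "A new season of your favourite series starts tonight.",
]
# How strongly the commissioning party wants each message amplified (illustrative).
sponsor_weight = [1.0, 0.2, 0.5]

# Topical resonance: cosine similarity between the user's history and each candidate.
vec = TfidfVectorizer().fit(user_history + candidates)
resonance = cosine_similarity(vec.transform(user_history), vec.transform(candidates))[0]

# Final score combines resonance with sponsor alignment; highest-scoring items are shown.
scores = [r * w for r, w in zip(resonance, sponsor_weight)]
for score, text in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {text}")

The point of the sketch is only that personalisation and steering can be combined in a single ranking step: what the user sees is simultaneously what resonates with them and what the configuring party prefers.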