Generative Emotion AI (GE-AI): Transformative Potentials and Risks in Modern Governance


Abstract

Generative Emotion Artificial Intelligence (GE-AI) combines generative AI, which produces novel content from learned data patterns, with emotion AI, which recognizes human emotions and generates emotionally responsive content. By analyzing facial expressions, voice intonation, and physiological signals, GE-AI systems interpret emotional states and produce tailored responses in text, audio, and video. While GE-AI has the potential to significantly enhance human-computer interaction, it also poses substantial risks when misused for surveillance, manipulation, and control. This article argues that such misuse constitutes a form of digital authoritarianism that threatens democratic processes and civil liberties.

The article employs a qualitative methodology to examine the dual capability of GE-AI, highlighting both its promise and its dangers. Through case study analysis, it explores deepfake video generators such as Face2Face, which enables real-time facial re-enactment, producing videos in which facial expressions are manipulated to match those of a source actor. Enhanced by emotion AI, these deepfakes become more lifelike and emotionally accurate, increasing their believability and their impact in spreading disinformation. Similarly, emotion-driven chatbots such as Replika and emotion-responsive content generators such as GPT-4 are analyzed for their ability to influence users through emotionally charged dialogue and content, which can be exploited in political or marketing campaigns.

Incorporating a risk assessment component, the article evaluates how integrating emotion recognition into surveillance systems allows authorities to monitor emotions in real time, facilitating deeper psychological control. Drawing on real-world scenarios, it examines how these insights might be used to identify dissenters, monitor public sentiment, and pre-emptively suppress opposition. These risks extend beyond authoritarian regimes to democratic societies, where reactionary and technocratic forces are using GE-AI to influence elections and manipulate public opinion. For instance, GE-AI can craft emotionally charged disinformation campaigns, using deepfake videos and synthetic news articles to sway public sentiment and erode trust in institutions. The technology can also track and manipulate the emotional climate on social media, identifying moments of vulnerability at which to disseminate inflammatory content, creating an atmosphere of anxiety and uncertainty.

To mitigate these risks, the article proposes a regulatory framework comprising rules for deepfake technologies, transparency requirements for AI-generated content, and guidelines for the ethical use of emotion AI in surveillance and public communication. By addressing the risks associated with GE-AI, we can safeguard democratic principles and protect freedoms in the digital age.
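As a rough illustration of the recognize-then-generate loop the abstract describes, the following minimal Python sketch pairs a crude emotion detector with emotion-conditioned response templates. Everything here is a hypothetical stand-in: the keyword lists, template strings, and function names (detect_emotion, generate_response) are invented for illustration, and a deployed GE-AI system would instead infer affect from faces, voice, or physiology with trained classifiers and produce responses with a large generative model.

```python
# Minimal sketch of a GE-AI pipeline: (1) infer an emotional state from a
# user signal, (2) condition the generated response on that state.
# The lexical detector and fixed templates below are illustrative stand-ins,
# not how any production emotion-AI system actually works.

EMOTION_KEYWORDS = {
    "anger": {"furious", "outraged", "angry", "hate"},
    "fear": {"afraid", "scared", "worried", "anxious"},
    "joy": {"happy", "great", "excited", "glad"},
}

RESPONSE_TEMPLATES = {
    "anger": "I can tell this is frustrating. Let's look at {topic} together.",
    "fear": "That sounds worrying. Here is some reassuring context on {topic}.",
    "joy": "Glad to hear it! Here is more on {topic}.",
    "neutral": "Here is some information on {topic}.",
}

def detect_emotion(text: str) -> str:
    """Crude lexical emotion recognition; a stand-in for a trained model."""
    words = set(text.lower().split())
    for emotion, cues in EMOTION_KEYWORDS.items():
        if words & cues:
            return emotion
    return "neutral"

def generate_response(user_text: str, topic: str) -> str:
    """Emotion-conditioned generation: the detected state selects the framing."""
    emotion = detect_emotion(user_text)
    return RESPONSE_TEMPLATES[emotion].format(topic=topic)

if __name__ == "__main__":
    print(generate_response("I'm really worried about this election", "the election"))
```

Even in this toy form, the design point is visible: once the detected emotional state steers what gets generated, the same mechanism that makes interaction feel responsive also enables the targeted, affect-aware messaging at scale that the article identifies as a manipulation risk.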