ECPR

Moral Debates: AI Technologies and Democracy

Citizenship
Civil Society
Democratisation
Political Theory
Social Justice
Decision Making
Ethics
Normative Theory
P301
Tuğba Sevinç
Kadir Has University
Seniye Tilev
Kadir Has University
Hüseyin Kuyumcuoğlu
Kadir Has University

Abstract

This panel explores the normative concerns raised by emerging technologies such as Gen-AI and predictive algorithms and discusses how they challenge democratic principles and political processes. These technologies have substantial potential to produce and spread mis/disinformation, conceal discrimination, and facilitate manipulation. As a result, they are likely to exacerbate the erosion of democratic stability by influencing voting patterns, challenging established concepts of citizenship, and eroding public trust in the impartiality of decision-making processes. Just as past technological innovations demanded adjustments to our political systems, current technologies urgently require us to adapt our philosophical frameworks to safeguard our democratic values, including the underlying moral principles that sustain them. We believe this necessitates closer collaboration between moral and political theory and political science. The proposed papers employ key political and moral concepts – human dignity, fairness, autonomy, domination, and civic virtue – to analyze the normative implications of these technologies, highlighting the crucial importance of cultivating greater moral awareness and developing morally justified approaches within a renewed political theory.

To this end, the first paper aims to demonstrate the need to employ the deontological concept of human dignity to resolve the conflict between substantive unfairness (or discrimination) and the procedural unfairness stemming from inaccurate results in predictive algorithms. As policymakers increasingly rely on predictive algorithms for crucial public policy decisions, a clear regulative moral framework becomes paramount to ensure public trust and democratic stability. The second paper discusses and questions the potential of AI tools in the democratization of knowledge.
AI systems can analyze vast datasets, revealing patterns and insights valuable for political decision-making. A significant concern, however, arises from the inherent biases in the data used to train these AI models. These biases, often rooted in existing societal inequalities and cultural prejudices, are inadvertently replicated and even amplified by AI algorithms. This can lead to discriminatory outcomes, further marginalizing already disadvantaged groups and undermining the promise of AI to democratize knowledge and to provide fair opportunities to all citizens in every aspect of social decision-making. It therefore becomes crucial to unveil the illusion of impartiality once attributed to computational thinking and to trigger a new philosophical discussion, on moral grounds, of key political concepts such as fairness, pluralism, and transparency.

The third paper focuses on the proliferation of easily produced and distributed deepfake videos, arguing that such content undermines people's confidence in certain types of information about the world (an epistemic harm) that is essential for maintaining trust and social cohesion within society. The underregulated production and dissemination of deepfake content not only undermines public trust and harms democratic processes but also violates human dignity by thoroughly undermining people's right to be informed. What is needed is a moral framework that considers the threat to democratic processes together with (and linked to) the violation of human dignity.

The fourth paper focuses on the ease of accessing AI technology for deceitful purposes and points out the importance of cultivating vigilant and responsive citizens.
Just as free republics need vigilant citizens who are alert to the dangers of domination and corruption, AI societies require AI-vigilant citizens who are aware of the risks of AI misuse (especially in socio-political contexts) and who have developed the necessary character traits and dispositions (not only the formal skills) to keep their democracies and political cultures intact. This points to the need to reformulate civic virtues (and civic education) for the age of emerging AI technologies.

Titles

AI Vigilance: A Civic Virtue Perspective
AI: A Failed Potential for Better Democracies?
AI and Human Dignity
Deepfakes, Public Trust, and Human Dignity