Regulating AI-Generated Misinformation: A Millian Approach

Cyber Politics
Political Theory
Freedom
Ethics
Normative Theory
William Chan
University of Cambridge

Abstract

Generative AI (GAI) produces text, images, video, audio and other content by learning the patterns of its training data and generating new data on that basis. In recent years, however, generative AI has fuelled a flood of misinformation. It is often used to create deceptive content about, for instance, elections, wars, natural disasters, policies and the personal lives of politicians. Two features of AI-generated misinformation are important. First, misinformation produced by GAI is highly realistic and personalised, making it far more deceptive and placing much higher epistemic burdens on individuals to verify its accuracy. Second, GAI companies share some prima facie responsibility, legal and moral, for creating and maintaining the AI tools behind this tsunami of misinformation. Not only do existing regulations applicable to AI hint at the responsibility of tech companies to build AI models that avert negative social outcomes, but tech companies should also be held morally accountable to a significant extent, since they have the best capacity to correct those outcomes and commit a moral error when they fail to exercise that capacity.

A crucial question facing us is whether AI-generated misinformation should be regulated at all and, if so, how. This article develops an answer to that question through the political philosophy of J. S. Mill. I focus on three key components of his thought:

Principle of Utility: actions are right in proportion as they tend to promote happiness, and wrong as they tend to produce the reverse.

Human Perfectibility: humans are progressive beings who have a higher-order interest in possessing and developing our deliberative capacities; the exercise of those capacities, moreover, is among the more important elements of human happiness.

Free Speech and the Harm Principle: the only ground for limiting a competent adult’s freedom is to prevent harm (i.e. the invasion of someone’s rights) to others; providing individuals with free speech within the bounds of the harm principle, moreover, is what it takes to realise the nature of humans as progressive beings over the long run.

Considering the features of AI-generated misinformation in light of Mill’s political philosophy, I argue that: (1) the making of GAI policies should depend entirely on whether they serve human happiness, with the exercise of deliberative capacities as its higher-order ingredient; and (2) AI-generated information should not be censored unless it invades people’s rights, but in cases where individuals’ rights are clearly invaded by AI-generated information, censorship is legitimate.