ECPR

Democratizing Expertise: The Limits and Possibilities of Artificial Intelligence in Enhancing Democratic Decision-Making

Democracy
Political Theory
Knowledge
Decision Making
Technology
Petr Špecián
Charles University

Abstract

In democratic decision-making, experts are best "kept on tap, not on top," to use Hélène Landemore's memorable phrase. But how can this outcome be achieved when the experts possess an epistemic advantage over laypeople? A layperson's cognitive heuristics for deciding whom to trust among putative experts with conflicting judgments are typically crude. It is thus all too easy to design a system that either enables those in advisory roles to overstep and act as lobbyists, or that fails to utilize existing expertise because that expertise is not rendered sufficiently trustworthy. The existing solutions appear far from perfect.

The aim of my talk is to explore the potential of emerging artificial intelligence technologies to solve this problem. Specifically, I will focus on large language models (LLMs) like ChatGPT. The use of LLMs raises intriguing questions, since the pros and cons of substituting AI expert systems for human experts both appear significant.

Perhaps the most worrisome issue with LLMs in this context is their propensity to "hallucinate": occasionally, they make entirely imaginary claims, present them in an authoritative voice, and resist users' attempts at correction. LLMs also lack transparency, and nobody knows precisely how they arrive at their answers from their extensive training sets of human-generated text. This gives rise to the problem of misalignment: the systems are difficult to control. Finally, the most capable systems are owned by private corporations, and the high costs associated with their training lead to a concentration of market power. Deploying such tools to enhance the democratic process may therefore inadvertently corrupt it through corporate influence.

These issues cannot be taken lightly. However, I argue that they are not necessarily decisive. For one, it is not as if human experts never "hallucinate." They, too, are prone to confabulation, whether intentional or not (e.g., misremembering facts or erring in professional judgment). They also make pronouncements in an authoritative voice while occasionally hiding depths of uncertainty beneath a thin layer of expert understanding. In short, the track record of human expertise is not spotless, and its universal superiority to AI expert systems is by no means obvious. Secondly, the issues of alignment and corporate control work against each other to some extent. If the technology is generally difficult to control, it is also difficult for its corporate masters to make it do their bidding. Lapses in alignment can thus be seen as democratizing, since they allow ordinary users to access the technology's full potential despite the wishes of its creators. Conversely, if further development makes the systems easier to control, the capability to shape them specifically to the needs of a democratic assembly is also likely to increase.

In light of these considerations, I maintain that LLMs deserve serious attention as a potential solution to the outstanding issues in the relationship between democracy and expertise. If deployed properly, they can be of great positive value.