The Limits of Authoritative Knowledge: Artificial Intelligence and the Politics of Future Objects

European Union
Governance
Institutions
Knowledge
Global
International
Hendrik Schopmans
WZB Berlin Social Science Center

Abstract

The recent emergence of object-centered theories in International Relations has offered a new perspective on the processes through which knowledge about international problems, such as the climate or piracy, is constructed. In short, this view holds that the problems international organizations (IOs) respond to are not simply ‘out there’. Instead, experts actively construct phenomena as “objects” of governance in competitive processes, with each expert group striving to have its representation of an object recognized as authoritative. Whichever knowledge claim becomes authoritative then shapes how IOs perceive and tackle a given problem. In this paper, I aim to expand object-centered theories by challenging the implicit notion that expert competition over an object takes place, and concludes, before IOs act on the object in question. I argue that some objects of expertise—what I here call future objects—are less likely to be conclusively “fixed” and more likely to be epistemically contested as they become politically salient. In contrast to objects that are grounded in relatively well-observable phenomena (e.g. the climate), future objects, such as emerging technologies, are characterized by uncertainty and a high degree of interpretative flexibility. This, in turn, increases the degree of epistemic contestation, as different groups of experts create mutually exclusive descriptions of what the object is (and what it is not), why it is problematic, and what policies should address it. Because expert disagreements pertain to the very nature of the object itself, experts lack a common language in which to settle their conflicts, precluding the emergence of an authoritative representation. I expect that in the absence of a coherent scientific view, IOs are less likely to act decisively on an emerging issue. To illustrate this argument, the paper discusses the case of artificial intelligence (AI). After tracing how the decades-long competition among scientists over the “true” nature of AI has prevented the emergence of consensual knowledge, the paper turns to the European Commission’s efforts to develop a regulatory framework for AI. Here, I draw on expert interviews to explore how the presence of competing forms of expertise has shaped the European Commission’s response to a governance object that is continually in the making.