AI Governance and the Institutional Division of Labor

Keywords: Democracy, Elites, Political Theory, Business, Ethics, Normative Theory, Technology
Ted Lechterman
IE School of Politics, Economics & Global Affairs

Abstract

Theories of the division of moral labor seek to provide general guidelines regarding how duties of justice should be shared among individuals, associations, and the state, and how power should be distributed across branches of government and between sectors of society (sometimes referred to as the institutional division of labor). Canonical work on these questions was completed long before the emergence of artificial intelligence as a ubiquitous technology. This paper claims that the division of labor needs, at best, a major upgrade, and at worst, a new operating system, to address the governance challenges of AI.

Noting how pervasively AI is becoming intertwined with economic, social, and political practices, distinguished voices have argued that principles of political morality should apply directly to AI models or to those who train, develop, or operate them. According to this perspective, foundational AI models must themselves promote principles of distributive equality and/or directly incorporate popular input into their training processes. This perspective stands in tension with traditional thinking in political philosophy, which holds that principles of political morality apply in the first instance to the basic structure of society, understood as the way that major social and political institutions combine to shape the rights and opportunities of their subjects. According to this way of thinking, principles of justice and democracy do not apply directly to the conduct of individuals or firms or to specific technologies but rather to the set of background rules that shape social and economic activity. On this view, technology per se cannot be part of the basic structure and becomes subject to principles of political morality only indirectly, insofar as it is integrated with or regulated by major institutions. AI and its creators are not required to promote principles of justice or seek democratic input; attempts to self-regulate in these ways may even be an illegitimate privatization of public responsibilities. But without broadening our understanding of the basic structure to include AI, the counterargument goes, how else can we respond to the claims of justice and accountability that AI triggers?

Resolving this question at the philosophical level is important in part because of the explosion of competing proposals regarding AI governance and value alignment. While these proposals often converge on certain norms at the surface (e.g., fairness, explainability, accountability), they reveal significant disagreement about how these norms should be interpreted, ranked, and operationalized. A general theory of how responsibility for, and authority over, AI should be distributed would help guide choices about which, if any, of these approaches to enact, alone or in combination. It would also provide essential clarity on questions regarding the appropriate role of different agents in the AI governance milieu, including governments, standard-setting bodies, boards of directors, managers, and AI researchers.