ECPR


Three worlds of risk regulation: the cultural political economy of governing artificial intelligence in Europe

European Union
Political Economy
Regulation
Technology
Regine Paul
Universitetet i Bergen

Abstract

The European Union is at the forefront of regulating artificial intelligence (AI), seeking to set the pace for global regulation. Its proposed AI Act contains the first coercive rules for AI system providers and users, including outright bans of some applications and comparatively hefty penalties for non-compliance. Core to its regulatory strategy is a risk-based approach with proportional interventions that escalate with risk levels. While many AI regulation scholars and practitioners interpret risk-based regulation as rational problem-solving, regulation and governance studies have instead suggested a strong link to legitimacy and reputation management. This article investigates how and why the EU seeks legitimacy and reputation through risk-based AI regulation. Drawing on a cultural political economy framework, combined with a critical-interpretivist analysis of primary documents and semi-structured expert interviews, I suggest that the risk heuristic enables the EU to carve out three markedly different worlds of AI risk regulation: the innovation-enabling world of low-risk AI systems, the risk-mitigating world of high-risk AI systems, and the strict watchdog world of unacceptable AI applications. This risk-based regulatory differentiation not only allows the EU to commit itself to competing policy goals and to mobilize different legitimatory logics at once; it also serves as a crucial branding device in Brussels' efforts to compete globally on "trustworthy" AI.