ECPR


Artificial Intelligence Co-regulation? The role of standards in the EU AI Act

Cyber Politics
European Union
Governance
Regulation
Experimental Design
Technology
Policy-Making
Chris Marsden
Politics Discipline, School of Social Sciences, Monash University
Marta Cantero Gamito
University of Tartu

Abstract

The European Union's proposal for a Regulation on artificial intelligence (AI), also known as the ‘AI Act’, is sparking debate about the use of co-regulation in the field of AI. Co-regulation, which involves a shared responsibility between public and private actors to draft and enforce rules, has potential advantages such as flexibility and adaptability to rapidly changing technology, as well as opportunities for expert input and industry participation. However, this regulatory technique also carries risks, including conflicts of interest, a lack of transparency and accountability, insufficient competence to assess human rights challenges, and the potential for under- or over-regulation. Standards play a crucial role in the regulation of complex digital technologies such as AI. They provide essential information rules, and their development is historically and institutionally determined. While self-regulatory standards such as Internet protocols exist, other standards are required by law, or their development is delegated to approved bodies. The AI Act crafts a horizontal private product liability regime with a heritage in European consumer law. As it currently stands, therefore, the draft rules leave the definition of the technical details of compliance, through liability audits and risk assessments, to standardization. This paper critically evaluates the use of standards and co-regulation in the AI Act and highlights potential risks such as conflicts of interest and lack of accountability. It offers a normative perspective on the potential effectiveness of co-regulation in regulating AI and examines the implications of using standards in the governance of a pervasive technology with important societal implications, including for fundamental rights. The paper contributes to the ongoing discussion and research agenda on the regulation of AI, providing a critical analysis of the use of co-regulation and standards in the EU.