Risky Applications – A Genealogical Analysis of the Concept of Risk in the Proposed European AI Act

European Politics
European Union
Regulation
Technology
Alexis Galan
Universität Bonn
Rebecca Schmidt

Abstract

Automated decision-making processes and artificial intelligence (AI) are becoming increasingly prevalent across all sectors of society. However, such applications are also accompanied by significant risks, which have prompted regulatory responses. One of the most significant regulatory efforts in this domain is the so-called AI Act proposed by the European Commission in 2021. The Act adopts a risk-based approach: the suggested regulatory framework foresees three categories of AI practices, structured along their envisaged degree of risk. First, the most risky practices are prohibited. These include practices such as social scoring, certain detrimental nudging tactics, and specific types of remote biometric identification by law enforcement. The next category covers so-called high-risk AI. This includes both the use of AI in types of products which EU law already requires to undergo third-party conformity assessment, and certain areas of application deemed high risk, such as biometric identification, critical infrastructure, and education. Such high-risk AI applications must themselves undergo conformity assessment procedures, similar to those for product safety mentioned above. Lastly, all applications that do not fall within the categories just described are deemed limited or minimal risk. No specific requirements are linked to their development or application, other than transparency obligations in some cases.

Despite being the first comprehensive attempt to regulate AI technology, the proposed AI Act, and public regulation in the area more generally, faces many challenges. One major constraint concerns the evolving nature of the technologies involved, where sufficient understanding is often limited to developers and professional users. As a result, the regulation of the risks created by these complex technologies depends heavily on the work of those (usually private) actors configuring the technical details. Technical standards produced by private actors are therefore crucial, as they specify the parameters within which new technologies operate.

Our paper analyses the underlying theoretical understanding of risk in the regulatory context just outlined. Two aspects are of central importance for this analysis: (1) the role of risk as a factor in regulating a technology that is under constant development with unknown outcomes; and (2) risk as defined not only by public regulators but also by private actors involved in the development and application of risk-creating activities. In our analysis we adopt a genealogical approach to provide a critical perspective on risk as adopted in the AI Act. Notions such as risk are neither coherent nor transparent in meaning. Hence, it becomes crucial to understand how risk appears as a form of knowledge, a style of thought, and a technique for monitoring and calculating dynamic phenomena. Genealogy also asks how terms such as resilience or design have come to have the significance and the effects they do. In the AI Act in particular, we observe a certain continuum with existing regulatory frameworks, notably EU product safety regulation.