ECPR

Technology neutrality as a way to future-proof regulation: the case of artificial intelligence (AI)

European Union
Governance
Policy Analysis
Regulation
Internet
Ethics
Technology
Big Data
Atte Ojanen
University of Turku

Abstract

Technology neutrality has established itself as one of the guiding principles of innovation and technology governance, especially within the European Union. While subject to multiple interpretations, the principle states that regulation should neither favor nor discriminate against any particular technology, but should instead focus on the functions or outcomes of its use (Craig, 2016; Greenberg, 2015). Multiple rationales for the principle have been offered, such as promoting competition and innovation, but this article focuses specifically on technology neutrality as a way to future-proof governance and regulation. Although rarely studied explicitly, technology neutrality is generally expected to enable future-proof regulation by allowing legislation to adapt to changes in technology and its impacts over time, rather than being tied to specific, possibly obsolete technologies (Koops et al., 2006; Ohm, 2010). Yet technology neutrality’s potential for future-proofing regulation has not been analyzed in depth in emerging contexts such as AI (Harasta, 2018). To address this gap, the article conducts a qualitative analysis of the implications of technology neutrality for future-proof regulation in the context of artificial intelligence, specifically within the European Union’s Artificial Intelligence Act (AI Act). To do so, the paper critically contrasts the regulation of AI with that of telecommunications, where the principle of technology neutrality first emerged (Puhakainen & Väyrynen, 2021). In particular, the article examines the two main factors that typically shape future-proof regulation, namely the scope or openness of the regulatory framework and its capacity for risk anticipation (Divissenko, 2023), which translate directly into the definition of AI and the risk-based approach within the AI Act.
Based on this, I analyze to what extent technology neutrality can be expected to enhance the future-proofness of the EU’s approach to regulating AI, in terms of the legislation’s capacity to anticipate and address the societal risks of AI. The findings indicate that while technology neutrality can be beneficial in anticipating the societal risks posed by AI systems, it may also obscure the political choices and agency embedded in technology deployment and governance. In other words, regulation itself shapes the demand for and incentives behind emerging technologies through push and pull mechanisms. Therefore, if path dependencies and lock-in effects are not sufficiently considered, technology neutrality may hinder a more responsible and anticipatory approach to AI regulation. The article concludes by suggesting ways to rethink and revise the principle of technology neutrality to better align it with future-proof AI governance.