Innovation and Domination: Finding the Balance

Democracy
Political Theory
Social Justice
Business
Freedom
Technology
Jonne Maas
Delft University of Technology

Abstract

It is well known that technological innovations often entail unintended and unforeseen individual and societal harms. So, too, in the field of Artificial Intelligence (AI). While the consensus is that AI must be regulated to mitigate harm, the best approach to such regulation is still disputed. Some argue for self-regulation, where companies do everything possible to minimize potential harm. This approach is predominantly supported by the tech industry itself. Others claim that self-regulation is insufficient and call for other forms of regulation, such as independent auditing boards or democratic, participatory design of AI. This stance is often held by academics, civil society organizations, and, more generally, outsiders to the tech industry. Simply coming out in favour of one position or the other will not advance the debate; instead, we must identify and assess the underlying values of both groups. The call for self-regulation is ultimately rooted in one’s freedom to innovate: the possibility to start and grow one’s own company as long as no harm is done. This freedom to innovate is particularly prevalent in the United States, where, unsurprisingly, we find Silicon Valley’s strong demand for self-regulation. Calls for self-regulation thus have strong roots in a neo-liberal conception of freedom as non-interference along the lines of Mill’s harm principle. The call for external regulation, however, stems from a fear of being subjected to an uncontrolled or arbitrary power. Arbitrary power, or what is known as domination in neo-republican theory, puts the dominated agent in a vulnerable position. Even if the dominant agent treats the subordinate agent well, the subordinate can do nothing but hope that the dominant agent will continue to do so (or, indeed, develop ‘ethical’ AI). Self-regulation, then, does not mitigate this vulnerability, as it does not address the power asymmetry between the dominator and the dominated. The debate on self-regulation thus implies a trade-off between the individual freedom to innovate and the societal fear of domination. A fruitful way to move the debate on self-regulation forward is, therefore, to search for those types of regulation that, on the one hand, relieve society’s vulnerable position and, on the other, allow private individuals and companies to maintain a sense of autonomy in their innovative practices.