From the Standing Group on Knowledge Politics and Policies.
The release of ChatGPT in late 2022 captured the public imagination. It provided a tool everyone can play around with, making Artificial Intelligence (AI) accessible to many. It also raised many urgent questions, including about protecting copyright, fighting disinformation, and avoiding discrimination. These and other questions were taken up in several governance and policy initiatives, developed in the context of a new hype characterized by high positive and negative expectations surrounding ChatGPT. In my recent article ‘Governance fix? Power and politics in controversies about governing generative AI’ (Ulnicane 2024), I examine the emerging governance of generative AI, with a particular focus on the activities of the G7, the Organisation for Economic Co-operation and Development (OECD), and the AI Safety Summit.
To examine the key ideas about generative AI governance, I draw on the Responsible Innovation approach, which has been widely used in technology governance research and practice for more than ten years. While the Responsible Innovation approach is broad and fluid, I focus in particular on three of its features. First, it emphasizes collective stewardship of technology towards socially beneficial goals, going beyond the individual responsibility of technology developers. Second, it stresses the importance of including the public in a two-way consultation, assigning society an active role in co-shaping technology. Third, its approach to technology governance goes beyond mere risk management to encompass the purpose and direction of innovation.
The initial international initiatives for the governance of generative AI fall short of the premises of inclusive and purposeful governance of technology suggested by the Responsible Innovation approach. They predominantly focus on risks, framing the public debate about generative AI largely in terms of existential versus immediate risks. Concerns about risk management dominate over considerations of the purpose of this technology. Generative AI can be characterized as a largely supply-driven technology push with unclear public demand. Its early governance initiatives pay relatively little attention to its potential contribution to tackling the major societal challenges of our time. Moreover, they assign a rather passive role to society, which is expected to adapt and contribute to risk mitigation rather than actively co-shape the technology. This creates a kind of paradox of generative AI governance, whereby a technology that is used widely by society is at the same time governed narrowly by technical experts.
I coin the term ‘governance fix’ to highlight the instrumental and technocratic approach to governance in generative AI policy. To do so, I build on the concept of the ‘technological fix’, which presents technology as a quick and cheap solution to complex and uncertain social problems. According to the ‘technological fix’ concept, technical solutions are seen as superior to political, economic, educational and other social science approaches to tackling problems. Accordingly, engineers are best placed to solve social problems and there is no need for public participation.
While the technological fix remains a highly popular approach, it has received considerable criticism for being incomplete, ineffective and mechanical, for not getting to the heart of the problem, and for creating new problems as it solves the old ones. Prioritizing technical solutions allows technology companies to promote their vested interests, while letting policymakers avoid searching for more complex approaches to problems that require immediate attention.
I suggest that in the case of generative AI we can observe a ‘governance fix’ approach that, similarly to the ‘technological fix’, treats governance as a technocratic endeavour that can be quickly developed and implemented by experts without public participation and deliberation regarding the goals, direction and purpose of generative AI. As an alternative to this narrow and technocratic approach, I suggest participatory and inclusive governance that focuses on co-shaping technology towards socially beneficial goals.
AI, including generative AI, continues to pose major political questions. More research on the politics, power and policy of AI is in the works. If you are interested in collaborating, please get in touch. Those attending the ECPR General Conference in Dublin might be interested in the featured roundtable ‘Politics, Political Science, and Artificial Intelligence’ on Monday, 12 August, 11:15-13:00, and the panel ‘Challenging power in Artificial Intelligence politics and policies’ on Wednesday, 14 August, 16:15-18:00. Hope to see many of you there!
Ulnicane, I. (2024) ‘Governance fix? Power and politics in controversies about governing generative AI’, Policy and Society, puae022, https://doi.org/10.1093/polsoc/puae022
This post was initially published on the Europe of Knowledge blog.