
Securing AI, Securing Europe? The EU AI Act’s Role in Addressing Extremist Threats

European Union
Extremism
Governance
Government
Christine Gajo
University of Vienna EIF
Sean McCafferty
Metropolitan University Prague


Abstract

The EU AI Act represents a landmark effort to regulate artificial intelligence technologies through a risk-based framework. While the Act addresses a wide range of high-risk applications, its capacity to mitigate the threats posed by extremist actors using generative AI remains limited. This article critically assesses the extent to which the EU AI Act responds to the emerging risks associated with the use of generative AI by extremist groups for propaganda, disinformation and recruitment within the EU. By analysing the relevant provisions of the Act and its supplementary measure, the Code of Practice, against documented extremist use cases in the online space, we evaluate how far the EU’s legal framework mitigates the misuse of AI for extremist purposes and holds those responsible to account. Extremism is understood here in a broad sense: the empirical sections of this article compare AI use by far-right and jihadist groups. We will argue that, given the current state of the art in generative AI risk management, the AI Act and Code of Practice provide a sound risk mitigation framework for models designated as posing “systemic risk”. Under the current systematisation, however, this designation applies only to the four largest models on the market, leaving their derivatives and all other smaller models essentially unregulated. This is likely to result in a concentration of extremist material being fed into such models, producing increasingly extremist and biased outputs over time. In the absence of regulation, legislators and citizens alike have no legal recourse to hold companies liable for failing to adhere to EU-prescribed ethical norms for AI models. The EU’s lax approach to regulating smaller digital platforms has historically created governance gaps that extremists opportunistically exploited, providing fertile ground for radicalisation. We will argue that leaving smaller generative AI models entirely unregulated is likely to encourage the same tendencies. In the interest of counter-extremism efforts, we advocate symmetrical requirements for generative AI systems and greater alignment between AI governance and internal security objectives. Despite the EU’s recent shift towards relaxing AI and wider digital regulation, EU legislators would be wise to consider the potential long-term implications of leaving a widely accessible, paradigm-shifting technology unregulated.