Tracing Military AI in the EU: Insecurities, Ambitions, and an In-Between Position by Justinas Lingevicius

From the Standing Group on Knowledge Politics and Policies

The European Union's (EU) decision to exclude the military from the scope of its AI policy, as outlined in the AI Act, raises an important question: to what extent does this decision align with reality, particularly in the context of evolving defence-related programmes and instruments, such as the European Defence Fund? This question builds on existing analyses, which reveal that dual-use technologies are already embedded in military-related practices within the EU but are still framed primarily as market-driven industrial policies rather than as security-focused initiatives. I argue that, given the dual-use nature of AI, the EU follows the same pattern: it foregrounds the market and industry side of AI as civilian, while the military side is sidelined even as the EU exercises military power through AI. The central question, then, is how the EU frames military AI within its strategic discourse.

I analyse this question in my new article, ‘Transformation, insecurity, and uncontrolled automation: frames of military AI in the EU AI strategic discourse’ (Lingevicius, 2024), published in the journal Critical Military Studies.

Frames of military AI

In the article, I identify three intertextual frames within the EU’s AI strategic discourse. These frames do not appear to be static or arranged in a hierarchical order. Instead, they reflect diverse positions and ongoing tensions within EU institutions. For instance, the European Parliament advocates for a more proactive approach to incorporating the military, whereas the European Commission maintains its stance of excluding the military from the policy scope.

The first frame presents military AI as a transformative technology that requires the EU to adapt to a changing landscape. Transformation is closely tied to an international environment seen as competitive, one that challenges the EU to be competitive in turn. Transformation is therefore not solely about pursuing technological advancement but also about positioning the EU at the forefront, particularly in comparison with other major powers such as the United States and China. Lastly, this frame encompasses various calls for building military AI-enabled capacities, suggesting that transformation has also become synonymous with increased armament aimed at securing a competitive edge.

The second frame points in a different direction: international dynamics are marked by increased insecurity driven by the arms race. In other words, it carries meanings that portray external developments related to military AI as threatening to the EU. Here, too, international dynamics are interwoven with the perception of the technology itself, which is considered unknown and uncontrolled, while its development and (mis)uses raise concerns. Such a situation is unfavourable to the EU’s ambition to be competitive, because the race creates even more uncertainty.

The third frame portrays military AI as uncontrolled automation, focusing less on the international environment and more on the nature of the technology itself. Here, concerns are articulated mainly through the anticipation of reduced human involvement and challenged human agency, above all via autonomous weapons systems. International engagement is accordingly approached differently: the EU is urged to engage multilaterally and cooperate with partners in order to ensure the ‘human in the loop’ principle. In this frame, military AI is perceived as a source of insecurity, and the response is rooted not in the pursuit of greater technological power but in normative principles aimed at mitigating the risks of automation.

Overall, the identified frames reveal different sources of insecurity that lead to different EU responses (building military capability, navigating the arms race, and/or ensuring control), each of which internalizes the military in its own way.

The EU takes an in-between position

The frames also appear closely related to how the EU perceives and aims to define itself. Despite the EU’s noted reluctance to openly discuss military-related concerns, the empirical evidence suggests that the integration of military thinking into the AI framework is being actively pursued. I therefore argue that the EU’s position can be described as one of in-betweenness: it navigates a state of liminality while transitioning across boundaries. Those boundaries are marked by different elements of military power (embracing transformation and military capacity building), normative power (establishing values-driven principles to control military AI uses), and market power (competition through production and ownership).

At the same time, the empirical evidence supports the claim that the EU exploits the dual-use nature of AI by putting the spotlight on one side (civil) while sidelining the other (military). Such reinforced duality enables the EU to keep room for manoeuvre and to frame its self-image as it prefers, despite evolving military practices and noticeable contradictions. However, the potential of dual-use technology to be both civil and military also creates risks of manipulation, blurring boundaries and accountability in a context where the expansion of military power is dispersed across different policies. The case of the emerging EU AI policy demonstrates that the military thereby becomes implicated in civilian policies such as the single market, industry, R&D, and digitalization. It therefore remains important to challenge official positions, to understand their implications, and to foster critical debates that address existing inconsistencies.

Justinas Lingevicius is a Lecturer and PhD candidate at Vilnius University, Institute of International Relations and Political Science, Lithuania.

References:

Lingevicius, Justinas (2024): ‘Transformation, insecurity, and uncontrolled automation: frames of military AI in the EU AI strategic discourse’, Critical Military Studies. https://doi.org/10.1080/23337486.2024.2387890

This post was initially published on Europe of Knowledge blog. 

07 March 2025