Automation and Targeted Realities: An Investigation of the Ordering and Created Realities of Automated Targeting Systems

Cyber Politics
Institutions
War
Decision Making
State Power
Technology
Big Data
Keefer Denney-Turner
York University

Abstract

Over the past year, journalists investigating the ongoing conflict between Israel and Hamas in the Gaza Strip have uncovered the use of numerous artificial intelligence (AI) applications that have aided the Israel Defense Forces (IDF) in selecting targets and choosing when to strike, a process known as algorithmic targeting. These include tools such as Habsora, which aids intelligence analysts in identifying buildings allegedly used by enemy forces; Lavender, which identifies potential enemy operatives; and Fire Factory, which analyzes historical data from previous strikes to calculate optimal timelines and the ordnance required for strikes against enemy personnel. Although a number of these algorithms have been in use since 2021, the IDF and intelligence services have increasingly relied on their outputs since the escalation of the conflict following Hamas’ October 7, 2023 attack on Israel. With this increased usage, the logic and knowledge output of what Bo and Dorsey have labeled “AI-enabled decision support systems” (AI-DSS) has become ingrained in the operational frameworks of military and intelligence services. This raises the question: how will the institutionalization of technologies that automate the process by which military targets are selected shift, order, and create realities in the localities where they are used? How will their implementation change the way military decision-makers and intelligence analysts approach targeting and the question of who is and is not a legitimate target? To answer these questions, this article breaks down the boundaries between agency, decision-making, and subordination to analyze the ways in which power is given to and then exercised by algorithms in the space of military targeting. To do so, it draws on a sociotechnical imaginary framework and a micropolitical understanding of technology. Combining these two approaches, the article uses the example of the Lavender algorithm to discuss the ways in which military and intelligence analysts’ use of this targeting mechanism creates a suspension of agency and decision-making. This is a largely theoretical intervention, as insufficient detail is available to analyze the particulars of ethics and bias in both the upstream and downstream stages of the algorithm’s production. Instead, the article breaks down institutional utilization to show how power is placed in the hands of Lavender as a creator of knowledge and rationality. As that power is enacted in an intelligence and military setting, the concept of the ‘legitimate target’ is framed as a technical output. In acting upon these outputs, the use of targeting algorithms produces a localized reality that obscures the precursors of decision-making by casting the role of the combatant as a definitive algorithmic output, carrying consequences for those Palestinians selected as targets.