ECPR

A Report on the Use of Facial Recognition and Object Tracking Software Using Artificial Intelligence in Paris: The Case of the 2024 Olympics & Illegal Use of Facial Recognition Software by French Police

Governance
Human Rights
Regulation
Security
Technology
Hunter Vaughan
King's College London

Abstract

This paper examines the deployment of AI-powered surveillance technologies in France through two contrasting case studies that illuminate fundamental tensions in algorithmic governance: the illegal use of facial recognition software by French police forces and the legally sanctioned implementation of object tracking systems during the 2024 Paris Olympics. By analyzing these cases comparatively, the paper explores how techno-solutionist approaches to security and public order navigate, or violate, legal frameworks, accountability mechanisms, and democratic oversight.

The first case study investigates the unauthorized deployment of BriefCam's facial recognition software by French police forces between 2015 and 2023, as revealed by investigative journalists. Despite explicit prohibitions under the EU General Data Protection Regulation (GDPR) and French data protection law, multiple police services, including those in Paris and Marseille, used the software's facial recognition capabilities. This case exemplifies a governance crisis rooted in opacity: the national data protection authority (CNIL) claimed ignorance of the technology's use, while the Ministry of the Interior allegedly helped conceal its deployment. The case raises critical questions about the state's capacity to self-regulate AI adoption and about the effectiveness of existing legal frameworks when confronted with technological imperatives within security institutions.

The second case examines France's implementation of the "Loi JO 2024," which authorized algorithmic video surveillance to detect "predetermined events" threatening public order during the 2024 Olympics. While legally sanctioned and publicly communicated, this deployment, extended until March 2025, demonstrates how temporary security measures risk becoming permanent surveillance infrastructure. The legislation explicitly prohibited facial recognition yet enabled extensive behavioral monitoring through object tracking and crowd analysis algorithms, revealing how legal frameworks attempt to balance security imperatives with privacy protections.

Theoretically, the paper engages with the literature on surveillance capitalism, algorithmic governmentality, and the paradoxes of techno-solutionism in public administration. It demonstrates how AI surveillance operates simultaneously as an administrative tool and a political instrument, reshaping state-citizen relationships through automated decision-making processes that minimize human oversight. The comparative analysis reveals that legal authorization does not resolve fundamental accountability deficits: both cases contribute to what civil society organizations characterize as a "dystopian" expansion of surveillance infrastructure with inadequate democratic control.

Methodologically, the paper draws on policy document analysis, investigative journalism sources, and examination of regulatory frameworks. It situates French developments within broader EU regulatory debates, particularly the ongoing AI Act negotiations and the tensions between security exceptions and human rights protections.

The paper contributes to the understanding of algorithmic governance crises by demonstrating that the critical challenge is not simply illegal versus legal deployment, but rather the structural opacity, limited accountability, and weak enforcement mechanisms that characterize both scenarios. Whether implemented covertly or through emergency legislation, AI surveillance systems reshape governance in ways that elude democratic scrutiny and concentrate power within security institutions.
The analysis concludes that effective governance of AI surveillance requires not merely regulatory frameworks but also robust enforcement mechanisms, mandatory transparency standards, independent oversight bodies with technical capacity, and meaningful restrictions on the "emergency" exceptions that normalize expansive surveillance. These findings speak directly to contemporary debates about whether techno-solutionism can enhance state capacity without fundamentally compromising democratic accountability.