Political behavior in LLM agents: experimental insights into governance, global power asymmetries, security risks, and the dynamics of knowledge construction

Cyber Politics
Internet
Lab Experiments
Technology
Big Data
Giorgio Volta
The School of Advanced Defence Studies CASD
Giacomo Longo
The School of Advanced Defence Studies CASD

Abstract

Large Language Models (LLMs) are not neutral instruments but actors embedded in global hierarchies of power, contests over technological sovereignty, and the conditions of international security. This article examines how the development, control, and governance of these systems intersect with political authority, comparing the strategies of the United States, China, the European Union, and Russia. Their approaches to LLM design and deployment reveal distinct ambitions in terms of influence, normative projection, and informational control. To assess the security implications of this technological shift, the article introduces an integrated framework that considers risks emerging during training, during use, and in the political environment in which these systems are embedded. The first category includes data poisoning, possible backdoor insertion, and the broader contamination of the information ecosystem, including forms of "LLM grooming". The second concerns prompt manipulation, disinformation production, and operational misuse in cyber contexts. The third captures the political consequences of dual-use capabilities, proliferation dynamics, and escalation pathways. Particular attention is given to autonomous research agents with real-time web access, which are often assumed to provide grounding, reliability, and up-to-date analysis. The article asks whether agentic research methods introduce new forms of opacity and risk. Early generations of LLM use relied on whatever knowledge happened to be encoded in the model, a practice that produced stale information and hallucinated arguments. Letting the model consult selected external documents improved grounding, but its output still depends on the quality of those materials and on what the model managed to retrieve. Agentic systems that can autonomously browse the web appear to offer a remedy by accessing fresh sources and following multistage reasoning plans. Yet such systems remain affected by stochastic sampling, inherited model biases, and problems introduced by search engines themselves. To demonstrate how these risks manifest, the article performs an extensive empirical evaluation in which multiple LLMs of different origins and scales are tasked with producing political analyses across a range of topics. Each model is run repeatedly on the same task, and the resulting reports are examined for the coherence of their claims, the degree to which models converge or diverge in their interpretations, and the breadth of perspectives they generate. The findings show that deep research agents produce narratives that reflect the positioning of their creators. Understanding how autonomous LLM agents construct knowledge, how they reflect internal biases, and what risks they introduce is essential for governing the technological transformations now reshaping global politics.
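
As a purely illustrative sketch (not the authors' pipeline), the repeated-run protocol described in the abstract could be approximated as follows: each model answers the same analytical prompt several times, and within-model coherence and cross-model divergence are estimated with a simple lexical similarity measure. The model identifiers, the query_model helper, the prompt, and the TF-IDF cosine metric are all assumptions introduced here for illustration.

```python
# Minimal sketch of a repeated-run convergence/divergence check for LLM-generated
# political analyses. Model names and the query function are placeholders.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

MODELS = ["model_us", "model_cn", "model_eu"]   # hypothetical model identifiers
RUNS_PER_MODEL = 5
PROMPT = "Assess the security implications of LLM governance for the EU."

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: call the actual model or research-agent API here."""
    raise NotImplementedError

def collect_reports() -> dict[str, list[str]]:
    # Run every model repeatedly on the same task.
    return {m: [query_model(m, PROMPT) for _ in range(RUNS_PER_MODEL)] for m in MODELS}

def mean_pairwise_similarity(texts: list[str]) -> float:
    # Average cosine similarity between all pairs of reports (TF-IDF space).
    vectors = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(texts)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

def analyse(reports: dict[str, list[str]]) -> None:
    # Within-model coherence: how stable is one model across repeated runs?
    for model, texts in reports.items():
        print(f"{model} internal coherence: {mean_pairwise_similarity(texts):.2f}")
    # Cross-model divergence: do models of different origins converge on a reading?
    for a, b in combinations(MODELS, 2):
        cross = mean_pairwise_similarity(reports[a] + reports[b])
        print(f"{a} vs {b} combined similarity: {cross:.2f}")
```

A richer implementation would replace the lexical measure with claim-level or stance-level comparison, but even this rough form captures the two quantities the abstract emphasises: stability of a single model across runs and divergence between models of different origins.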