ECPR

Debiasing Training Reduces Confirmation Bias in National Risk Analysts

Governance
Security
Knowledge
Bas Heerma van Voss
Radboud Universiteit Nijmegen


Abstract

How reliable is expert judgment in state risk governance, and can it be systematically improved? Governments rely on national risk assessments to allocate resources across existential threats such as pandemics, war, and climate change. These assessments are produced by expert communities embedded in public bureaucracies, yet they depend fundamentally on human judgment, and the cognitive foundations of that expertise remain poorly accounted for institutionally. This paper provides the first experimental evidence on the cognitive biases of national-level risk analysts, a professional group at the core of contemporary risk governance. We focus on two biases with particularly severe implications for policy: confirmation bias, the tendency to seek and overweight evidence that supports prior beliefs, and bias blind spot, the tendency to perceive others as biased while viewing one's own judgments as objective. Both can distort probabilistic risk assessments, yet neither has previously been measured systematically among the officials who produce national risk forecasts.

We conducted a preregistered experiment involving more than half of all contributors to a national risk assessment in a European country, alongside a matched comparison group of graduate students. Participants completed validated scales measuring confirmation bias and bias blind spot in both general decision tasks and risk-specific judgments related to national threats. Between pre- and post-tests, all participants received a one-shot debiasing intervention consisting of a short training on cognitive biases and a structured group exercise.

The results speak to debates on state capacity, technocracy, and epistemic governance. First, both groups exhibit confirmation bias, in both the risk and general domains, leaving clear room for improvement. Second, national risk analysts exhibit less confirmation bias than students, not only within their domain of expertise but also in unrelated decision contexts. This contradicts the dominant assumption in the expertise literature that cognitive advantages are narrowly domain-specific, and instead suggests that professional experience in risk governance produces broader epistemic discipline. Third, both experts and novices display a strong bias blind spot, indicating that even highly trained analysts systematically underestimate their own susceptibility to error. Most importantly for public policy, the debiasing intervention substantially reduced confirmation bias in both analysts and students, with no significant difference between the two groups. This demonstrates that expert judgment in state risk institutions is not only fallible but also malleable through low-cost cognitive interventions. The effects hold across general and risk-specific domains and are robust to differences in education, experience, and prior familiarity with behavioral science.

These findings have implications for the institutional design of risk governance: expert-based forecasting can be systematically improved. Rather than replacing expert judgment with algorithms or markets, governments can strengthen state capacity by investing in the cognitive infrastructure of their expert organizations. In an era of global risks and contested expertise, designing institutions that actively manage cognitive bias may be as important as improving data, models, or formal procedures.