First-Mover Advantage and Cognitive Enhancement: Lessons from AI Competition
International Relations
Policy Analysis
Political Competition
Political Psychology
Public Policy
Negotiation
Decision Making
Policy Change
Abstract
In this paper, I develop and defend what I call the Argument from Competition. I begin by examining an argument that has become influential in the policy discourse around novel AI systems. Reconstructed in standard form, it holds that (i) AI development is expected to yield substantial economic and military gains for the nations that achieve it; (ii) those gains would allow such nations to catch up with or overtake their strategic rivals; and therefore (iii) liberal democracies should not allow a period in which rivals reap these benefits first and must, consequently, strive to develop AI before them. Versions of this reasoning are already articulated by political and policy figures (Blair & Hague, 2025).
I do not assess the argument's strength in the AI case. Rather, I use it as a template and ask whether the same premises succeed when applied to cognitive enhancement: technologies aimed at improving cognitive capacities such as concentration, working memory, and learning speed beyond normal healthy ranges. I argue that they do. If enhanced cognition increases productivity and accelerates innovation, as it appears to do, then, by the same logic that motivates the AI race, these technologies will generate major economic and strategic benefits. Rivals who develop them first will gain a notable advantage. Liberal democracies therefore face the same prudential imperative: to make a serious attempt to develop cognitive enhancement, or risk falling behind.
I support the key premises by drawing on research linking cognitive ability to economic growth (Hanushek & Wößmann, 2010, 2012) and by showing that even modest average gains can produce large national effects when applied at scale. The feasibility premise, namely that these technologies can plausibly be developed through suitable investment in research and development, is bolstered by the existence of 'backdoor', or accidentally developed, pharmacological enhancers such as modafinil and Ritalin, and by emerging breakthroughs in brain–computer interfaces, non-invasive brain stimulation, and genetic editing, all of which indicate that significant enhancement is technically achievable. The final premise, that development would not entail "unpalatable consequences," is defended by addressing two main concerns.
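Before turning to those concerns, a brief illustration of the scale point above; the figures here are hypothetical assumptions for exposition, not estimates taken from the cited studies. Suppose widespread enhancement raised annual GDP growth by half a percentage point, from 2.0% to 2.5%. Compounded over fifty years, national output would then be

\[
\left(\frac{1.025}{1.020}\right)^{50} \approx 1.28,
\]

that is, roughly 28% larger than it would otherwise have been, which is a large national effect from a modest per-year gain.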
First, the coercion objection holds that the national-level benefits may depend on widespread enhancement, which risks placing indirect pressure on citizens to participate. I respond that liberal democracies already tolerate, and even mandate, coercive enhancement in the form of education, which fits the same functional definition. I also draw on existing counterarguments: declining to develop enhancers removes the option to enhance for everyone, thereby limiting the autonomy of those who would choose to do so (Rouse, 2013). Second, the ultimate-harm objection (Persson & Savulescu, 2008) warns that enhancement could accelerate technological progress and thus amplify existential risk. I argue that this worry either presupposes a level of international trust and coordination that is currently unrealistic or underestimates the dangers of stifling scientific progress itself.
The upshot is a normative warning and a policy proposal: if the Argument from Competition has force in the AI domain, then its premises likewise compel liberal democracies to invest, responsibly but urgently, in cognitive enhancement research.