How do people reason about social protection in the age of AI?

Comparative Politics
Political Economy
Social Policy
Welfare State
Survey Experiments
Technology
Matthias Haslberger
Universität St Gallen

Abstract

As the speed with which generative AI is penetrating the workplace dwarfs that of earlier technologies such as the personal computer or the internet itself, people are increasingly going to experience AI as a coworker or competitor. This experience, together with emerging narratives about AI taking jobs, is likely to raise the salience of economic security and how it can be protected through social policies. We therefore investigate how people weigh conflicting information from experts and personal experience with AI when forming views on social policy. We furthermore study whether different kinds of information matter for views on different kinds of policies.

We will field our study in the US and Germany, approximating a most different systems design with an archetypal liberal and an archetypal conservative welfare state. Crucially, with a representative sample of 3,000 respondents per country, this study will have the sample quality and statistical power for reliable inference. In the experimental part, participants will receive two video treatments: an "expert information treatment" and an "experience treatment". In the experience treatment, people perform a task and we test how they respond to a subsequent demonstration of the potential of AI as a "coworker" or "competitor". The expert information treatment provides high-level predictions about the overall economic consequences of generative AI. Both treatments have a "positive" and a "negative" arm, and the videos will be about 90 seconds long.

By randomly exposing participants to a positive or negative experience with AI and to positive or negative expert information, which may therefore pull in opposite directions, the experiment allows us to distinguish between two mechanisms: is it the activity of engaging with the technology itself, or exposure to narratives about the technology, that shapes people's views? In the case of the latter, we would expect the effect of the expert information treatment to dominate and the joint effect of positive information and a negative experience to be positive, and vice versa. This question matters from a substantive standpoint, since we know surprisingly little about how people weigh different kinds of information when forming political opinions. Additionally, the study promises new methodological insights into why information experiments often fail to induce the hypothesised effects.

We distinguish between three facets of social policies in response to AI: policy objective, policy target, and policy level. The policy objective describes what the policy intends to achieve; here we rely on the common distinction between compensation, investment, and steering policies. The policy target dimension captures how the objective is meant to be achieved: by redistributing resources towards a group or away from it. Finally, the policy level describes whether a policy formulates a concrete set of actions or an abstract goal. We hypothesise that different types of information differ in their effect on support for policies along these dimensions. For example, support for steering policies might be influenced by abstract expert information, while experience matters for compensation policies, which are directly linked to self-interest. Overall, this study promises important insights into the political consequences of changes in workplace technology.
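
As a purely illustrative sketch of the inferential logic of the 2x2 design described above, the snippet below simulates a single-country sample of 3,000 respondents and fits an OLS model with an interaction between the two treatments; the variable names (expert_info, experience, support), the outcome scale, and the effect sizes are hypothetical assumptions for illustration, not the study's pre-registered specification or data.

# Hypothetical sketch of the 2x2 factorial analysis implied by the design.
# All variable names and effect sizes below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 3000  # target sample size per country

# Randomly assign the two binary treatments (1 = positive arm, 0 = negative arm)
expert_info = rng.integers(0, 2, n)
experience = rng.integers(0, 2, n)

# Simulate policy support on a 0-10 scale under assumed treatment effects:
# a larger expert-information coefficient would be consistent with the
# "narratives dominate" mechanism.
support = (
    5.0
    + 0.6 * expert_info              # assumed effect of positive expert information
    + 0.2 * experience               # assumed effect of a positive AI experience
    + 0.1 * expert_info * experience # assumed interaction
    + rng.normal(0, 2, n)
)
support = np.clip(support, 0, 10)

df = pd.DataFrame({
    "support": support,
    "expert_info": expert_info,
    "experience": experience,
})

# OLS with an interaction term: comparing the two main effects (and their sum
# when the treatments pull in opposite directions) indicates whether engaging
# with the technology or exposure to expert narratives dominates.
model = smf.ols("support ~ expert_info * experience", data=df).fit()
print(model.summary())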