Tuesday 14:00 - 15:30 BST (28/04/2026)
This lecture draws on research by Hannah Waight, Eddie Yang, Yin Yuan, Solomon Messing, Margaret E. Roberts, Brandon Stewart, and Joshua Tucker. Millions of people around the world query large language models for information. While several studies have compellingly documented the persuasive potential of these models, there is limited evidence of who or what influences the models themselves, prompting widespread concern over which companies and governments build and regulate them. We show through six studies that government control of the media already influences the output of large language models via their training data. To understand the specific mechanism by which government control can influence LLMs, we begin with a case study of China's media. The combination of influence and persuasive potential suggests a troubling conclusion: states and powerful institutions have increased strategic incentives to leverage media control in the hope of shaping large language model output.