Beyond Big Brother: Regulating State Use of Facial Recognition Technology
Governance
Public Policy
Regulation
Comparative Perspective
Technology
Abstract
Artificial Intelligence (AI) is revolutionizing public service and administration, including service delivery, policy-making, governance, and security. One emerging application of AI in the public sector is biometrics, the use of biological or behavioural characteristics to identify or verify individuals. While biometrics have historically been used in police surveillance and forensic science, recent advances in AI have expanded their deployment into public spaces and everyday practices, from unlocking our phones to shopping in retail stores. From autocracies to liberal democracies, AI-powered surveillance capabilities are becoming more common, transforming the state's capacity for surveillance. The proliferation of dataveillance and AI surveillance has reshaped surveillance practices, giving rise to "ubiquitous surveillance." Facial recognition technology (FRT) is increasingly used as a biometric method, especially post-COVID-19, owing to the unique and recognizable nature of the faceprint. Modern surveillance thus relies heavily on FRT, characterized as a "Digital Panopticon" or "Superpanopticon": cameras that keep an eye on public spaces, automatic recognition, and real-time recognition systems used to analyze the large amounts of data gathered. Recent discussions, particularly those surrounding AI-powered facial recognition systems, have raised questions about the accuracy of these systems, ethical values, privacy, and mass surveillance. A preliminary literature review and recent empirical evidence highlight a regulatory lacuna in government use of AI-powered biometric applications in legal and policy contexts. The current research agenda frames the legitimacy of government deployment of FRT in the public sphere as a trade-off between privacy and public safety.
Although much research weighs the costs and benefits of state use of FRT or focuses on its potential harms, limited research explores the regulatory framework for FRT. Some scholars and organizations call for comprehensive, permanent bans on the use of FRT for routine policing, mass surveillance, and discriminatory profiling, arguing that no regulatory system can prevent its abuse by law enforcement agencies and corporations. Others argue that deploying FRT across government services requires a piecemeal, case-by-case regulatory approach. Calls to regulate FRT are growing louder by the day, and civil liberties organizations are at the forefront of the rising resistance. However, there is less consensus on how to rein in FRT. Despite the EU AI Act's regulatory efforts, instances of mass deployment of biometric technology are expanding worldwide. While the EU AI Act sets out a de facto regulation that could significantly reduce and control the deployment of real-time and remote biometrics, "exceptions" and regulatory lacunae remain in deploying the technology in public spaces. Therefore, this research analyzes FRT policies and regulations across jurisdictions, including China, the EU, the UK, and the US. Through several case studies, it examines the use of FRT by authorities in law enforcement, border security, and government mobile apps. In conclusion, this study aims to provide a comprehensive perspective on the regulation of FRT and presents dynamic, agile frameworks that offer insights into the potential future directions of FRT governance.