

Success and Failure of Notified Bodies as Regulatory Intermediaries in European AI Governance

European Union
Institutions
Public Policy
Regulation
Technology
Michael Sierra
Hebrew University of Jerusalem



Abstract

The European Union’s Artificial Intelligence Act assigns a pivotal role to notified bodies, the independent third-party assessors responsible for evaluating high-risk AI systems. These intermediaries are expected to transform legal norms of fairness, transparency, and non-discrimination into verifiable technical standards. Yet the ability of notified bodies to detect and mitigate algorithmic bias remains uncertain. This paper examines when and why such intermediaries succeed or fail in governing bias through a comparative analysis of two prominent conformity-assessment actors: DEKRA Certification GmbH and TÜV SÜD Product Service GmbH. Drawing on Regulatory Intermediary Theory (Abbott, Levi-Faur, & Snidal, 2017), the study conceptualizes notified bodies as trust intermediaries that mediate between regulators and AI developers. Using document analysis, interview material, and policy mapping, it traces how institutional design, epistemic capacity, and independence condition their effectiveness. DEKRA represents an emergent success model: although not yet designated under the AI Act, it already serves as a notified body under other EU directives and has proactively developed bias-testing and model-robustness procedures through initiatives such as Germany’s AI Quality & Testing Hub and participation in ISO/DIN standardization. Its technical competence and coordination with regulators illustrate how intermediaries can translate ethical principles into operational audit tools. TÜV SÜD, by contrast, exemplifies the vulnerabilities of delegated regulation. Its involvement in the PIP breast-implant scandal and its ongoing difficulties in certifying AI-enabled medical devices expose structural dependencies, limited expertise in algorithmic bias, and opaque audit methodologies. The paper argues that notified bodies will earn trust in AI governance only when they combine institutional autonomy, epistemic authority, and bias-sensitive audit design. Absent these, conformity assessment risks legitimizing biased AI systems under the guise of compliance.