Design-by-Analogy: Can past certification use-cases inform ethical AI certification?
Governance
Political Economy
Regulation
Qualitative Comparative Analysis
Comparative Perspective
Technology
Abstract
Artificial intelligence (AI) technologies' rapid spread and use present national and transnational governance challenges (Dafoe, 2018; 2019). Governments worldwide are beginning to develop and implement national AI strategies, legislation, regulation, and policies to realize AI's benefits while mitigating potential harms (OECD.AI, 2023). Many of these initiatives, however, address only national AI governance challenges and pay insufficient attention to instruments suited to AI's transnational governance challenges. To advance thinking on these transnational challenges, scholars have turned to other sectors for insights, for example proposing certification programs modeled on experiences in environmental stewardship or global agri-food (Cihon, 2019; Cihon et al., 2021; Dafoe, 2018). Our paper builds on this line of inquiry by examining the structure, function, and enforcement mechanisms of certification across three use-cases (environment, agriculture, and international banking) to inform AI certification through design-by-analogy (Auld, 2014; Hatanaka et al., 2005; Helleiner, 2010).
Certification programs vary in their structure, functions, and enforcement mechanisms. Some programs are structured around a nonprofit that sets a standard for a specific problem and then builds a certification program around that standard: it trains and accredits auditors to audit against the standard's requirements and awards a certification mark to organizations that pass these audits. Other programs are structured around an international organization in which several standard-setting committees develop standards and submit them to an overarching body for enforcement. Functionally, certification programs tend to operate the same way: an organization seeks out and applies for certification, is assessed against the program's requirements (usually by an accredited third-party auditor), and, if it passes the audit, is awarded the certification mark. Enforcement mechanisms often include some form of membership revocation and/or certification suspension when noncompliance, often repeated, occurs.
Extensive research documents the operation of certification across use-cases, from environmental stewardship to international banking. To what extent can what we know about the structures, functions, and enforcement of these initiatives, and their consequences and (in)effectiveness, inform emerging AI certification programs such as the Responsible AI Institute's Certification Program, Denmark's D-seal program, or Ireland's and Malta's forthcoming AI certification programs (as called for in their national AI strategies)? This paper aims to answer that question by examining use-cases to identify areas of commonality and difference between past experiences and current AI governance challenges. The paper proceeds in four parts. First, it provides a three-part literature review covering private governance; certification programs, their limitations, and theories to address those limitations; and emerging AI certification programs. Second, it justifies and details the case study methodology. Third, focusing on certification cases in environmental stewardship, global agri-food, and international banking, it explores the structure, function, and enforcement of certification to glean implications for AI. Lastly, a discussion ties these case study results back to the research question and the private governance literature and considers how they can inform emerging AI certification programs.