Accepting Algorithmic Assessments - Under what conditions do students and teachers accept the use of AI assessment tools?
Education
Experimental Design
Technology
Abstract
In recent years, the adoption of AI tools has surged across a variety of sectors, with the goal of increasing not only productivity and efficiency but also the accuracy and reliability of decision-making. In higher education, too, AI tools promise potential efficiency gains and remedies for flaws in human judgement. For example, AI assessment tools could be deployed to help teachers assess student assignments. This is an under-discussed topic compared with (valid) concerns about how uncritical or even unethical use of generative AI may negatively impact students' learning outcomes.
The potential benefits of using such AI assessment tools are plentiful. They may provide faster feedback for students, and free up time for teachers to engage in other teaching or research activities. They may also provide more consistent assessments, avoiding both between- and within-teacher variation in assessments as well as human biases that teachers may (unconsciously) hold against some students.
The actual adoption and deployment of AI assessment tools, however, will largely depend on the extent to which the usage of such tools is seen as legitimate and acceptable by both students and teachers. Despite the aforementioned potential benefits, there are many valid reasons for both groups to be skeptical of AI assessments. AI systems have been found to replicate biases against minorities and vulnerable groups, producing discriminatory outcomes. Furthermore, questions have been raised about the transparency and accountability of AI decisions, with uncertainties around the allocation of responsibility for AI-assisted decisions, as well as difficulties in explaining how complex AI systems reach their decisions. Moreover, there may be concerns about data privacy and how the (personal) data collected by AI systems are stored. It is unclear how these conditions affect the willingness of both students and teachers to accept the usage of AI assessment tools.
This study aims to address this knowledge gap. Using a discrete choice experimental design, we test under what conditions students and teachers would be willing to accept the usage of AI assessment tools. A multi-disciplinary research team from eight Dutch universities (VU Amsterdam, Leiden University, Maastricht University, Open University, Tilburg University, University of Eindhoven, University of Twente, and Utrecht University) will disseminate the survey to the student and teacher populations of their respective universities, providing a large sample for testing the effects of five different conditions (motivation for introduction of the AI tool, scope of assessment, teacher involvement, transparency measure, and data storage) on students' and teachers' willingness to accept AI assessment tools. The aim is not only to address a scientific knowledge gap, but also to provide policy-relevant scientific evidence that could inform university policy and strategy pertaining to the adoption and usage of AI tools in higher education in general, and AI assessment tools in particular.
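To illustrate the mechanics of a discrete choice experiment over the five conditions named above, the sketch below enumerates attribute profiles and draws paired choice sets. The attribute levels shown are hypothetical placeholders for illustration only; the actual levels are defined by the study's experimental design and are not specified in this abstract.

```python
import itertools
import random

# Hypothetical levels for the five study attributes (illustration only;
# the real levels are set by the study's experimental design).
ATTRIBUTES = {
    "motivation": ["efficiency gains", "assessment consistency"],
    "scope": ["advisory grade", "final grade"],
    "teacher_involvement": ["teacher reviews AI grade", "fully automated"],
    "transparency": ["explanation provided", "no explanation"],
    "data_storage": ["stored at university", "stored by external vendor"],
}

def full_factorial(attributes):
    """Enumerate every combination of attribute levels (one profile each)."""
    keys = list(attributes)
    for combo in itertools.product(*(attributes[k] for k in keys)):
        yield dict(zip(keys, combo))

def sample_choice_sets(profiles, n_sets, set_size=2, seed=0):
    """Draw random choice sets of distinct profiles shown to respondents."""
    rng = random.Random(seed)
    return [rng.sample(profiles, set_size) for _ in range(n_sets)]

profiles = list(full_factorial(ATTRIBUTES))  # 2^5 = 32 profiles
choice_sets = sample_choice_sets(profiles, n_sets=8)
```

In practice, DCE studies typically use a fractional factorial or D-efficient design rather than random pairing, so that attribute effects can be estimated with fewer choice tasks per respondent; the full factorial above simply makes the profile space explicit.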