Janneke Gerards (Utrecht University - Faculty of Law; Institute for Jurisprudence, Constitutional and Administrative Law) & Frederik Zuiderveen Borgesius (iHub, interdisciplinary research hub on Security, Privacy, and Data Governance) have posted "Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence" on SSRN. Here is the abstract:
Algorithmic decision-making and similar types of artificial intelligence (AI) may lead to improvements in all sectors of society, but can also have discriminatory effects. While current non-discrimination law offers people some protection, AI decision-making presents the law with several challenges. For instance, AI can generate new categories of people based on seemingly innocuous characteristics, such as web browser preference or apartment number, or on more complex combinations of many data points. Such new types of differentiation could evade non-discrimination law, as browser type and apartment number are not protected characteristics, yet the differentiation could still be unfair, for instance if it reinforces social inequality.
This paper addresses the following question: Which system of non-discrimination law can best be applied to AI, given that AI can differentiate on the basis of characteristics that do not correlate with protected grounds of discrimination such as ethnicity or gender, and in light of the particular characteristics of the different systems of non-discrimination law? To answer this question, the paper analyses the current loopholes in the protection offered by non-discrimination law and explores how lawmakers can best approach AI-driven differentiation. While we focus on Europe, the paper's conceptual and theoretical approach makes it useful for scholars and policymakers in other regions too, as they encounter similar problems with AI.