Sylvia Lu (University of Michigan Law School) has posted Regulating Algorithmic Harms (Florida Law Review, forthcoming 2025) on SSRN. Here is the abstract:
In recent years, the rapid expansion of artificial intelligence (AI) innovations has led to a rise in algorithmic harms: harms emerging from AI operations that pose significant threats to civil rights and democratic values in today’s technological landscape. A facial recognition system designed to improve criminal detection wrongly collected sensitive personal data and flagged racial minorities as shoplifters. A risk-prediction algorithm adopted to identify patients in need of care denied medical treatment to Black individuals in poor health. A social media algorithm intended to boost social engagement exacerbated addictive behavior and mental illness in teenagers. These harms are becoming increasingly ubiquitous, yet they often manifest in small and invisible forms, enabling them to aggregate while eluding regulatory oversight. Secretly and cumulatively, they affect millions, even billions, of individuals.
This Article constructs a legal typology to categorize these harms. It argues that there are four primary types of algorithmic harms: eroding privacy, undermining autonomy, diminishing equality, and impairing safety. It also identifies two aggravating factors, accountability paucity and algorithmic opacity, that cause these seemingly minor harms to escalate into significant problems by obstructing harm detection and correction. The Article then conducts case studies of relevant legal frameworks in the United States, the European Union, and Japan to assess the effectiveness of existing responses to algorithmic harms. These case studies reveal that the existing regulatory regimes are insufficient: they either overlook certain types of harms or fail to account for their cumulative effects, thereby allowing problematic AI practices to circumvent legal obligations.
Drawing on these findings, this Article proposes three legal interventions to address algorithmic harms, each aiming to mitigate primary harms by targeting the aggravating factors. Refined harm-centric algorithmic impact assessments, which obligate AI developers to address compounded harms, serve as a starting point for enhancing algorithmic accountability. Because these assessments often take a collective focus and overlook individual differences, individual rights with respect to algorithmic systems provide enhanced control over AI applications that could lead to aggregated primary harms. The success of these tools relies on a set of disclosure duties designed to reduce algorithmic opacity and increase harm awareness, especially where AI use is associated with intangible yet far-reaching harms. Taken together, this harm-centric procedural approach advances the conversation about the legal definition of algorithmic harms, the boundaries of AI law, and viable approaches to effective algorithmic governance.
Highly recommended.