Irina Carnat (Scuola Superiore Sant'Anna di Pisa) has posted Human, All Too Human: Accounting for Automation Bias in Generative Large Language Models (International Data Privacy Law, 2024) on SSRN. Here is the abstract:
The paper examines the accountability gap arising from users' potential overreliance on the outputs of generative Large Language Models (LLMs) in decision-making processes, a tendency driven by automation bias and reinforced by anthropomorphism and the generation of factually incorrect text known as 'hallucination'. It critiques the techno-solutionist proposal of a human-in-the-loop remedy, arguing that addressing 'hallucination' from a purely technical perspective can paradoxically exacerbate user overreliance on algorithmic outputs because of automation bias. It likewise critiques regulatory optimism about human oversight, challenging its adequacy in addressing automation bias by comparing Article 14 of the Artificial Intelligence Act with the notion of 'meaningful human control' under the General Data Protection Regulation. Finally, it proposes a comprehensive socio-technical framework that integrates human factors, promotes AI literacy, ensures appropriate levels of automation for different usage contexts, and implements cognitive forcing functions by design. The paper cautions against treating human oversight as a panacea and instead advocates accountability measures along the entire AI value chain to appropriately calibrate user trust in generative LLMs.