Katie Szilagyi (University of Manitoba) has posted Regenerating Justice: ChatGPT and the Legal Minefield of Generative AI on SSRN. Here is the abstract:
Generative AI (GenAI) has taken the Internet by storm. The simple interfaces of systems like OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and others allow non-technical individuals to interact easily with artificial intelligence (AI) models, enabling natural language processing (NLP) activities that previously required considerable technical skill. After many years of optimistic forecasts and fringe enthusiasm, AI has installed itself in the mainstream dialogue via chatbot. While some observers applaud the possibilities these NLP systems enable, focusing on their potential to automate mundane tasks, produce comprehensive research summaries, and draft preliminary versions of written documents, others have reservations.
This paper adopts an automation bias lens to cast doubt on the growing claims that GenAI is a transformational tool for the legal industry. In this context, automation bias refers to the well-known psychological phenomenon in which human decision-makers unwittingly defer to automated processes, a tendency flowing from overreliance on the accuracy of the automation. Even well-meaning individuals aiming to keep a “human-in-the-loop” of any automated decision can fall prey to this phenomenon. I argue that the unthinking use of today’s GenAI systems will muddy the waters of human expertise, with serious consequences in the legal realm. Using GenAI to produce legal knowledge risks the creation and propagation of sourceless information, undermining law’s key social systematizing function.
Recommended.