Joshua Krook (University of Antwerp - Faculty of Law, Business & Law Research Group), Eike Schneiders (University of Nottingham), Tina Seabrooke (University of Southampton), Natalie Leesakul (University of Nottingham - Horizon Digital Economy Research Institute), & Jeremie Clos (Independent) have posted Large Language Models (LLMs) for Legal Advice: A Scoping Review on SSRN. Here is the abstract:
Large Language Models (LLMs) represent a challenge to traditional legal services, creating new means by which clients and lawyers can access legal concepts and case law, and even generate legal advice. As LLM platforms such as ChatGPT, Gemini and LaMDA become accessible to the public, everyday citizens may come to rely on these platforms for legal advice, shifting their information-seeking behaviour away from traditional channels, such as search engines and/or professional law firms. However, this shift will come with its own risks, including the possibility of LLMs providing false or misleading information. In this paper, we conduct a literature synthesis to better understand the ways in which LLMs are being used, or are envisioned to be used, to generate legal advice. The synthesis identifies a number of risks and benefits, including the risk of misleading information (hallucinations), and the benefits of reducing legal costs and standardising legal information. Data availability statement: No datasets were generated or analysed during the current study.