Robert Diab (Thompson Rivers University - Faculty of Law) has posted Too Dangerous to Deploy? The Challenge Language Models Pose to Regulating AI in Canada and the EU (University of British Columbia Law Review, Forthcoming) on SSRN. Here is the abstract:
Canada and the European Union are at the forefront of AI regulation in tabling bills, the Artificial Intelligence and Data Act and the Artificial Intelligence Act (respectively), that would apply to commercial entities deploying AI systems, including those based on large language models, such as GPT-4. Both bills address the risk of harm to which AI systems give rise by imposing on their providers obligations to identify and mitigate risk, and civil or criminal liability for failing to do so where harm is caused. Both bills are premised on the ability to quantify in advance, and to a reasonable degree, the nature and extent of the risk a system poses. This paper canvasses evidence that raises doubt about whether providers or auditors have this ability. It argues that while providers can take measures to mitigate risk to some degree, the remaining risks are substantial but difficult to quantify, and may persist for the foreseeable future due to the intractable problem of novel methods of jailbreaking and the limits of model interpretability. These facts complicate the attempt to regulate language models through a risk-mitigation approach, but they do support efforts to regulate risk now rather than waiting to obtain further clarity on the nature and extent of that risk.