Theodoros Karathanasis (MIAI - AI Regulation Chair) has posted Addressing Dual-Use Risks in the EU: The Open-Source AI Foundation Models Case on SSRN. Here is the abstract:
The dual-use potential of emerging technologies, items with both civilian and military applications, has been a concern since 2018, especially given the possibility that Artificial Intelligence (AI) driven cyberspace skirmishes could escalate into conventional warfare. The European Union has made clear its intention to regulate General Purpose AI (GPAI) models by including them in the legislative debates over the forthcoming Regulation laying down harmonized rules on AI (AI Act). Unlike the United States (U.S.), however, neither the AI Act nor the EU's export control regime, which among other things addresses dual-use risks, appears to tackle the weaponization risks of open-source AI foundation models. The aim of this paper is to inform policy makers of the need to mitigate the risks of foreign access to such models for malicious use. Developing incentives for AI developers to adopt "self-destructing models" could eliminate the risk of the code being intercepted and used maliciously in the future. Given that the AI Act was passed in May 2024 and can no longer be amended, this paper concludes that the AI Office will need to address the dual-use risks of open-source GPAI models in the margins of the AI Act, which may prove ineffective at the EU level.