The EU publishes the first draft of regulatory guidance for general-purpose AI models
On Thursday, the European Union published the first draft of its Code of Practice for general-purpose AI (GPAI) models. The document, which won’t be finalized until May, lays out guidelines for managing risks – and gives companies a blueprint for complying and avoiding hefty fines. The EU’s AI Act came into force on August 1, but it left room to nail down the specifics of the GPAI rules later. This draft (via TechCrunch) is the first attempt to clarify what is expected of those more advanced models, giving stakeholders time to submit feedback and refine the rules before they kick in.
GPAI models are those trained with a total computing power of more than 10²⁵ FLOPs. Companies expected to fall under the EU’s guidelines include OpenAI, Google, Meta, Anthropic and Mistral – but that list could grow.
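For a rough sense of what that threshold means, one widely used heuristic (an assumption here, not part of the EU text) estimates training compute at about 6 FLOPs per parameter per training token. The sketch below applies that rule of thumb to a hypothetical model to check whether it would cross the 10²⁵ FLOP line:

```python
# Back-of-the-envelope check against the EU's 10^25 FLOP threshold.
# Uses the common ~6 * N * D approximation for dense transformer
# training compute; this heuristic and the model figures below are
# illustrative assumptions, not part of the EU draft.

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

# Hypothetical frontier model: 400B parameters trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"Estimated compute: {flops:.1e} FLOPs")  # ~3.6e+25
print("covered by GPAI rules" if flops > THRESHOLD_FLOPS else "below threshold")
```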
The document addresses several key areas for GPAI makers: transparency, copyright compliance, risk assessment, and technical/governance risk mitigation. The 36-page draft covers a lot of ground (and will likely balloon further before it’s finalized), but a few highlights stand out.
The code emphasizes transparency in AI development, requiring AI companies to provide information about the web crawlers they used to train their models – a major concern for copyright holders and creators. The risk assessment section aims to prevent cyber offenses, widespread discrimination and loss of control over AI (the “gone rogue” moment in a million bad sci-fi movies).
AI makers are expected to adopt a Safety and Security Framework (SSF) to break down their risk management policies and mitigate them in proportion to their systemic risks. The rules also cover technical areas such as protecting model data, providing fail-safe access controls, and continually reassessing the effectiveness of those measures. Finally, the governance section strives for accountability within the companies themselves, requiring ongoing risk assessments and bringing in outside experts where needed.
As with other EU laws targeting technology, companies that fail to comply with the AI Act can expect steep penalties: fines of up to €35 million (currently $36.8 million) or up to seven percent of their annual global revenue, whichever is greater.
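To make the “whichever is greater” mechanics concrete, here is a minimal sketch (the revenue figure is hypothetical):

```python
# Maximum penalty under the AI Act: the greater of a fixed EUR 35 million
# or 7% of annual global revenue. The revenue figure below is hypothetical.

FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_fine_eur(annual_global_revenue_eur: float) -> float:
    return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)

# For a company with EUR 2B in revenue, 7% (EUR 140M) exceeds the EUR 35M floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```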
Stakeholders are invited to submit feedback through the dedicated Futurium platform by November 28 to help shape the next draft. The rules are expected to be finalized by May 1, 2025.