The EU publishes the first draft of regulatory guidance for general purpose AI models


On Thursday, the European Union published its first draft of a Code of Practice for general purpose AI (GPAI) models. The document, which won’t be finalized until May, lays out risk-management guidelines and gives companies a blueprint for complying and avoiding hefty penalties. The EU’s AI Act came into force on August 1, but it left the specifics of GPAI regulation to be nailed down later. This draft (via TechCrunch) is the first attempt to clarify what’s expected of those more advanced models, giving stakeholders time to submit feedback and refine the rules before they kick in.

GPAI models are defined as those trained with a total computing power exceeding 10²⁵ FLOPs. Companies expected to fall under the EU’s guidelines include OpenAI, Google, Meta, Anthropic and Mistral. But that list could grow.

The document addresses several core areas for GPAI makers: transparency, copyright compliance, risk assessment and technical / governance risk mitigation. This 36-page draft covers a lot of ground (and will likely balloon much more before it’s finalized), but several highlights stand out.

The code emphasizes transparency in AI development and requires AI companies to provide information about the web crawlers they used to train their models — a key concern for copyright holders and creators. The risk assessment section aims to prevent cyber offenses, widespread discrimination and loss of control over AI (the “it’s gone rogue” sentient moment in a million bad sci-fi movies).

AI makers are expected to adopt a Safety and Security Framework (SSF) that breaks down their risk-management policies and mitigates risks in proportion to how systemic they are. The rules also cover technical areas like protecting model data, providing failsafe access controls and continually reassessing the effectiveness of those measures. Finally, the governance section strives for accountability within the companies themselves, requiring ongoing risk assessment and bringing in outside experts where needed.

Like the EU’s other tech-related regulations, companies that run afoul of the AI Act can expect steep penalties: fines of up to €35 million (currently $36.8 million) or up to seven percent of their global annual turnover, whichever is higher.

Stakeholders are invited to submit feedback through the dedicated Futurium platform by November 28 to help refine the next draft. The rules are expected to be finalized by May 1, 2025.

