
EU AI Act: Latest draft Code for AI model makers tiptoes towards gentler guidance for Big AI

by Carl Nash


Ahead of a May deadline to lock in guidance for providers of general purpose AI (GPAI) models on complying with provisions of the EU AI Act that apply to Big AI, a third draft of the Code of Practice was published on Tuesday. The Code has been in formulation since last year, and this draft is expected to be the last revision round before the guidelines are finalized in the coming months.

A website has also been launched with the aim of boosting the Code’s accessibility. Written feedback on the latest draft should be submitted by March 30, 2025.

The bloc’s risk-based rulebook for AI includes a sub-set of obligations that apply only to the most powerful AI model makers — covering areas such as transparency, copyright, and risk mitigation. The Code is aimed at helping GPAI model makers understand how to meet the legal obligations and avoid the risk of sanctions for non-compliance. AI Act penalties for breaches of GPAI requirements, specifically, could reach up to 3% of global annual turnover.

Streamlined

The latest revision of the Code is billed as having “a more streamlined structure with refined commitments and measures” compared to earlier iterations, based on feedback on the second draft that was published in December.

Further feedback, working group discussions and workshops will feed into the process of turning the third draft into final guidance. And the experts say they hope to achieve greater “clarity and coherence” in the final adopted version of the Code.

The draft is broken down into a handful of sections covering commitments for GPAIs, along with detailed guidance for transparency and copyright measures. There is also a section on safety and security obligations which apply to the most powerful models (with so-called systemic risk, or GPAISR).

On transparency, the guidance includes an example of a model documentation form GPAIs might be expected to complete in order to ensure that downstream deployers of their technology have access to key information to help with their own compliance.

Elsewhere, the copyright section likely remains the most immediately contentious area for Big AI.

The current draft is replete with terms like “best efforts”, “reasonable measures” and “appropriate measures” when it comes to complying with commitments such as respecting rights requirements when crawling the web to acquire data for model training, or mitigating the risk of models churning out copyright-infringing outputs.

The use of such mediated language suggests data-mining AI giants may feel they have plenty of wiggle room to carry on grabbing protected information to train their models and ask forgiveness later — but it remains to be seen whether the language gets toughened up in the final draft of the Code.

Language used in an earlier iteration of the Code — saying GPAIs should provide a single point of contact and complaint handling to make it easier for rightsholders to communicate grievances “directly and rapidly” — appears to have gone. Now, there is merely a line stating: “Signatories will designate a point of contact for communication with affected rightsholders and provide easily accessible information about it.”

The current text also suggests GPAIs may be able to refuse to act on copyright complaints by rightsholders if they are “manifestly unfounded or excessive, in particular because of their repetitive character.” It suggests attempts by creatives to flip the scales by making use of AI tools to try to detect copyright issues and automate filing complaints against Big AI could result in them… simply being ignored.

When it comes to safety and security, the EU AI Act’s requirements to evaluate and mitigate systemic risks already only apply to a subset of the most powerful models (those trained using a total computing power of more than 10^25 FLOPs) — but this latest draft sees some previously recommended measures being further narrowed in response to feedback.
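As a rough illustration of how that threshold is typically reasoned about (this sketch is not from the article, and the widely used ~6 × parameters × training-tokens approximation for training compute is an industry heuristic, not part of the AI Act itself):

```python
# Sketch: estimating whether a model's training run crosses the AI Act's
# 10^25 FLOP systemic-risk threshold, using the common approximation of
# roughly 6 FLOPs per parameter per training token.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the EU AI Act


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens


def crosses_systemic_risk_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds 10^25 FLOPs."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens lands at
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, just under the threshold.
print(crosses_systemic_risk_threshold(7e10, 1.5e13))
```

The takeaway is that the threshold is coarse: it singles out only the very largest frontier-scale training runs, which is why the systemic-risk obligations apply to so few providers.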

US pressure

Unmentioned in the EU press release about the latest draft are blistering attacks on European lawmaking generally, and the bloc’s rules for AI specifically, coming out of the U.S. administration led by president Donald Trump.

At the Paris AI Action summit last month, U.S. vice president JD Vance dismissed the need to regulate to ensure AI is applied safely — Trump’s administration would instead be leaning into “AI opportunity”. And he warned Europe that overregulation could kill the golden goose.

Since then, the bloc has moved to kill off one AI safety initiative — putting the AI Liability Directive on the chopping block. EU lawmakers have also trailed an incoming “omnibus” package of simplifying reforms to existing rules that they say are aimed at reducing red tape and bureaucracy for business, with a focus on areas like sustainability reporting. But with the AI Act still in the process of being implemented, there is clearly pressure being applied to dilute requirements.

At the Mobile World Congress trade show in Barcelona earlier this month, Arthur Mensch, founder of French GPAI model maker Mistral — a particularly loud opponent of the EU AI Act during negotiations to conclude the legislation back in 2023 — claimed the company is having difficulties finding technological solutions to comply with some of the rules. He added that the company is “working with the regulators to make sure that this is resolved.”

While this GPAI Code is being drawn up by independent experts, the European Commission — via the AI Office which oversees enforcement and other activity related to the law — is, in parallel, producing some “clarifying” guidance that will also shape how the law applies, including definitions for GPAIs and their responsibilities.

So look out for further guidance, “in due time”, from the AI Office — which the Commission says will “clarify … the scope of the rules” — as this could offer a pathway for nerve-losing lawmakers to respond to the U.S. lobbying to deregulate AI.


