A key provision in the EU’s AI rulebook for assessing the risks of foundation models such as ChatGPT may become obsolete within a year due to the pace of technological development, experts told Euractiv.
The EU’s AI Act is the world’s first comprehensive rulebook to regulate artificial intelligence, which it does based on an AI system’s capacity to cause harm. After years of intense negotiations, it got the final stamp of approval from the European Parliament with an overwhelming majority on Wednesday (13 March).
The AI Act distinguishes the risks posed by foundation models based on the computing power used to train them. Foundation models, also called general-purpose AI, are particularly powerful due to their myriad uses.
The law defines a threshold of 10^25 floating point operations (FLOPs), a measure of the total amount of computation used to train a model. AI models trained with more compute than this threshold are deemed to bring “systemic risk” and are regulated more stringently.
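To give a sense of scale, training compute is often estimated with the rule of thumb of roughly 6 FLOPs per parameter per training token, an approximation from the machine-learning scaling-laws literature, not part of the AI Act itself. The sketch below uses purely illustrative model sizes, not figures for any real system:

```python
# The AI Act's "systemic risk" threshold: total training compute in FLOPs.
THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * N * D rule of thumb."""
    return 6 * parameters * tokens

# Hypothetical example: a 1-trillion-parameter model trained on 10 trillion tokens.
estimate = training_flops(1e12, 10e12)
print(f"estimated compute: {estimate:.2e} FLOPs")  # 6.00e+25
print("exceeds systemic-risk threshold:", estimate > THRESHOLD_FLOPS)  # True
```

By this approximation, a model an order of magnitude smaller on either axis would fall below the threshold, which illustrates why the cutoff currently catches only a handful of frontier models.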
Technology outpaces regulation
However, Dragoş Tudorache, an MEP who acted as co-rapporteur on the file, said that the rules may soon become obsolete.
“By the time the rules for foundation models become applicable [12 months from now] either there will be four or five big models that will pass this threshold […] or a new leap in technology [will bring down the computational requirements for powerful foundation models],” he told Euractiv.
Right now, likely only Google’s Gemini and OpenAI’s latest ChatGPT models pass that threshold, Tudorache said.
In a separate interview with Euractiv, Oxford Internet Institute Professor of Technology and Regulation Sandra Wachter agreed with this assessment.
Recognizing the dizzying speed at which AI technology is developing, the AI Act comes with a certain amount of flexibility including when it comes to foundation models, said Tudorache.
The FLOPs threshold “confuses compute with risk”, which are separate things, Wachter told Euractiv. Regardless of their size, these models carry all sorts of risks around bias, misinformation, data protection and hallucinations, she said.
In the meantime, engineers in Silicon Valley and beyond are working to reduce the heavy computational lift, incentivised chiefly not by the AI Act, but by cost control.
Influential US venture capital firm a16z called the training of these models “one of the more computationally intensive tasks mankind has undertaken so far”. As such, companies are trying to bring down the massive costs associated with channelling that type of computing power.
The future of the threshold
The FLOPs classification is considered only an initial step and can be reviewed by the Commission along with other definitions and categorisations in the AI Act. There is no pre-determined timeline for reviewing the FLOPs criterion; it is up to the Commission to do so through a delegated act.
However, the path to review will not be easy.
The inclusion of foundation models in the legislation was not initially envisioned, but became imperative with the explosion of ChatGPT in 2022. How to regulate them was so contentious that negotiations nearly broke down over the issue in November 2023.
The part of the AI Act on foundation models “was a result of lobbying” and its impact is still unknown, Merve Hickok, president and research director of the Center for AI and Digital Policy, told Euractiv.