
Resourcing the AI Office

  • 22 hours ago
  • 2 min read

A strategic investment in Europe’s competitiveness and sovereignty


Cover image: Stock photograph from Pexels, used under a free licence.

Executive Summary


Public trust is a decisive factor in the adoption of Artificial Intelligence. At a time when most people remain sceptical of AI, the EU AI Act has the potential to create legal certainty and boost trustworthy AI. This is particularly important given that today’s most advanced AI models cannot yet be consistently relied upon to behave as intended, remain within acceptable risk boundaries, or resist misuse. By addressing these concerns, the AI Act can accelerate the integration of AI across the economy, strengthen European competitiveness, govern increasingly severe emerging risks, and reinforce the EU’s technological and geopolitical sovereignty. Realising these benefits, however, depends on effective enforcement across the EU, including strong Commission-level enforcement by the EU AI Office of the Act’s general-purpose AI (GPAI) provisions.


This report assesses whether the EU AI Office has sufficient resources to fulfil its role, particularly in supervising general-purpose AI models. Within the AI Office, Safety Unit A3 is responsible for overseeing GPAI models with systemic risk, which are likely to pose the greatest technical, legal, and governance challenges under the AI Act. The report also identifies concrete pathways for strengthening the AI Office’s institutional and financial capacity.


Our analysis finds that Unit A3 is currently under-resourced relative to the rapid growth in the number of AI models from frontier providers that it must supervise. When benchmarked against other EU regulators in critical sectors and against the enforcement of the Digital Services Act (DSA), the AI Office’s projected staffing and budget levels appear inadequate for the enforcement demands it will face in the coming years.


Key recommendations:


  1. Scaling the AI Office’s GPAI supervisory capacity to a level comparable to that of DSA enforcement. As a benchmark, the AI Office should resource Unit A3 for a minimum of 160 staff by 2030, supported by a unit-level annual budget in the range of €50–60 million.


  2. Securing sustainable funding and capacity increases through a combination of:

    • the Digital Omnibus, by specifying additional staffing for the AI Office (beyond the already proposed 38 FTE increase), including at least 30 additional FTEs exclusively for the supervision of GPAI models by the end of 2027, ensuring a realistic path to the 2030 target of 160 staff

    • the 2028–2034 Multiannual Financial Framework, to explicitly embed appropriate administrative staffing, budgetary resources, and IT infrastructure for the AI Office in the EU’s long-term budget, in line with the EU’s digital leadership ambitions

    • the EU annual budget, to provide appropriate operational funding, including for tenders, and to respond to developments not foreseen in the MFF

    • the evaluation of supervisory fees for providers of GPAI models with systemic risk, to establish a dedicated funding stream proportionate to regulatory workload

    • the exploration of an AI services levy for large AI providers, modelled on the 2018 Digital Services Tax proposal


Strengthening the AI Office is not only a matter of regulatory enforcement. It is a strategic investment in Europe’s competitiveness, technological sovereignty, and long-term ability to govern one of the most consequential technologies of the coming decades.


Read the full paper:



 
 
