
Public Statement on the Final Code of Practice

  • jimmyfarrell9
  • Jul 11
  • 3 min read

Over the past year, Pour Demain has worked closely with the AI Office, the independent expert Chairs and Vice Chairs, and other plenary participants, towards the world’s first governance tool for powerful AI models.


The Berlaymont: European Commission. Unsplash.

The final Code of Practice, a compliance tool for general-purpose AI (GPAI) providers under the EU AI Act, has been published. This brings to a close the first stage of a monumental effort in democratic governance, setting the stage globally for how the most powerful AI models can be regulated and serving as a platform for trusted AI adoption in Europe.


The Code involved a large number of expert stakeholders across all fields and, thanks to the hard work of the thirteen independent Chairs and Vice Chairs, was drawn up in impressive time compared with the usual timelines for industry standards. We commend the AI Office for facilitating and navigating this complex process.


The drafting process ran largely as planned for the first eight months, before the initial deadline of May 2nd was pushed back. Multiple rounds of feedback from all participants, as well as provider-only workshops, maintained an appropriate balance, ensuring that the practical needs of industry were met without compromising the AI Act's objectives of protecting European citizens and businesses.


However, it must also be stated that in the final few months the process shifted from participatory and open to exclusive and opaque. For comparison, there were eighteen provider-only workshops, against four specialised workshops for other stakeholders and just one specifically for civil society. This has inevitably resulted in a final Code noticeably weaker than previous drafts. It remains, however, a step in the right direction towards meaningful GPAI governance, a process in which we look forward to maintaining an active and collaborative role.


Important inclusions in the final Code include:


  • Mandatory external assessments: Under Appendix 3.5, providers of GPAI with systemic risk (GPAISR) will need to undergo independent external model evaluations, unless the model poses similar risk to other models already on the market or an adequate third party cannot be found. Reflecting voluntary commitments and existing industry practices, this is an essential inclusion with a legal basis in the AI Act.


  • Robust serious incident reporting: Under Commitment 9, GPAISR providers will need to keep track of relevant information regarding serious incidents involving their models, inform authorities of relevant information (including root cause analysis by state-of-the-art techniques according to Recital A), and abide by clearly defined reporting timelines.


  • Re-inclusion of public transparency: Under Measure 10.2, providers will need to publish their Safety and Security Framework and Model Reports. However, this comes with multiple disappointing caveats: the measure calls only for summarised versions, allows significant redactions (especially for model reports), and includes broad exclusionary criteria.


Important omissions include:


  • Pre-deployment reporting and model-specific adequacy assessment: Apart from notifying the AI Office about training runs, the final Code confirms that no model-specific reporting will be required pre-deployment, meaning highly capable and risky models could be released in the EU without the necessary checks and balances. This underscores the need for the AI Office to scale up rapidly, to secure the regulatory resources needed to enforce the AI Act's GPAI rules.


  • Whistleblower protections: Although present in previous drafts, the commitments in the final draft refer to whistleblower protections only through annual reminders of the whistleblower protection policy, cited as an example indicator of a healthy risk culture.


  • Serious incident response readiness: Already removed in the third draft, this has remained absent despite being commonplace in comparable safety-critical sectors. Whilst reporting serious incidents, as well as corrective measures, is mandatory, such corrective measures may be improvised on the spot.


Although this is a significant first step, the work to ensure safe, ethical and reliable general-purpose AI in Europe is only just beginning. With frontier model capabilities, and their accompanying risks, developing at record speed, from reasoning models to multi-agent scaffolding and beyond, the finalisation of this Code of Practice has come at the right time.


It is pivotally important that the review of these commitments be swift in light of emergencies, accidents and technological advancements. This must also be a multi-stakeholder, participatory effort, involving civil society, academia and downstream industry, to ensure democratic oversight. Finally, the newly established AI Office must grow quickly into its new role, receiving the necessary reinforcements from the EU's Annual Budget and Multiannual Financial Framework.


Pour Demain will continue to provide support through our research and targeted policy recommendations.


For questions and further information, please contact our EU Policy Co-Lead, Jimmy Farrell, at jimmy.farrell@pourdemain.eu.



Pour Demain

Europe Office

Clockwise

Avenue des Arts ‑ Kunstlaan 44

1040 Brussels

Belgium

Office Bern

Marktgasse 46

3011 Bern

(Mailing address: Mülhauserstrasse 75, 4056 Basel)
