
The Code of Practice: Pour Demain’s contributions to EU guidelines for general-purpose AI models

  • davidmarti27
  • 26 June
  • 3 min read

Last updated: 5 days ago


Illustration generated by Gamma


The final draft of the EU’s Code of Practice for general-purpose AI (GPAI) models is expected to be published in July. This will conclude an intense eleven-month drafting process, to which Pour Demain has contributed from the beginning. During the three drafting rounds, we submitted comprehensive feedback totalling over 230 pages of survey responses, including four supporting documents. We have also been in regular exchange with other interested parties throughout the process.


The following are the points Pour Demain emphasised during the three drafting rounds in November 2024, January 2025, and March 2025.


Mandatory external assessments 

External assessments provide crucial independent oversight of GPAI models. Pour Demain advocated for clear criteria that prevent providers from avoiding external scrutiny. Key recommendations include requiring pre-deployment external assessment of the exact model version intended for market release and extending minimum consultation periods to 35 days to allow thorough evaluation. We also pushed for stronger independence requirements for assessors, including provisions that prevent conflicts of interest, similar to requirements for external assessment in the Digital Services Act (DSA). Most importantly, providers should be required to justify why the access, information, and resources granted to external assessors are sufficient for effective assessment—addressing current gaps where minimal access (for example without access to a model’s ‘chain-of-thought’) greatly reduces the risk-mitigation potential of evaluations. For further information on how deeper model access must be granted as technical AI safety and security methods develop, see our paper Securing External Deeper-than-black-box GPAI Evaluations by Jimmy Farrell and technical GPAI researcher Alejandro Tlaie.


Serious incident reporting

While the third draft significantly improved incident reporting with tiered timelines, critical gaps remain that could undermine its effectiveness. Pour Demain emphasised making incident identification methods mandatory and additive rather than optional, removing vague language, and introducing 48-hour initial notifications so that authorities learn of incidents from official sources rather than from news reports. We advocated for stronger root cause analysis using state-of-the-art white-box techniques, explicit coverage of AI agent-related incidents, and aligning final report timelines with existing cybersecurity standards (30 days, not 60). The timeline should start from incident notification, not resolution, to prevent indefinite delays in regulatory oversight. For further information, see our policy brief on serious incident reporting.


Review and monitoring

The Code’s success depends on robust mechanisms for continuous improvement and adaptation. Pour Demain supported establishing a Multi-stakeholder Standing Taskforce under the AI Act’s Advisory Forum to facilitate ongoing monitoring and emergency updates, complemented by a Transparency Centre serving as a public repository for safety and security frameworks and model reports. These recommendations, drawing inspiration from the DSA’s Code of Practice on Disinformation, are important for ensuring that GPAI governance remains flexible while preserving democratic oversight. We recommended yearly adequacy assessments with three-month grace periods for compliance, plus emergency update procedures within one month for serious incidents or significant technological developments. This multi-stakeholder approach reduces regulatory burden while enabling civil society, academia, and industry experts to contribute meaningfully to oversight, creating a self-reinforcing system that adapts to the rapid pace of AI development.


Additional points we emphasised include, but are not limited to: 

  • Governing AI agents: we highlighted that AI agents represent a different risk category requiring specialised safeguards – including identity frameworks, delegated authorisation systems, hardened sandboxing, and real-time monitoring with emergency-shutdown capabilities. See our former fellow Afek Shamir’s article on governing AI agents for further details. 

  • Mandatory advance notification periods of 8-12 weeks before model deployment (depending on release type), to close loopholes that allow providers to avoid regulatory oversight and to enable structured dialogue that prevents costly model withdrawals and fines.


For further detail, see our supporting documents for the three submission rounds:

  • Supporting document Code First Draft, focusing on model access for third-party evaluations, and serious incident reporting

  • Supporting document Code Second Draft, co-submitted with The Future Society, focusing on external, secure and deeper-than-black-box evaluations, and on using ‘model similarity’ as a way to avoid safety assessments 

  • Supporting document Code Third Draft, on special attention required for AI agents, and on an updating mechanism for the Code 

  • Additional document submitted for the Third Draft, on risk thresholds, co-submitted with The Future Society and the Future of Life Institute


For questions and further information, please contact our EU Policy Co-Lead Jimmy Farrell at jimmy.farrell@pourdemain.eu
