In an era where General Purpose AI (GPAI) models are becoming increasingly prevalent and powerful, the need for robust incident reporting mechanisms has never been more critical. Pour Demain's latest policy paper (available below) sets out recommendations for implementing Article 55(1)(c) of the EU AI Act, which establishes serious incident reporting obligations for providers of GPAI models with systemic risk.
The policy paper outlines a comprehensive framework for GPAI serious incident reporting, addressing the unique challenges posed by these advanced AI systems.
Key recommendations include:
A tiered reporting timeline, balancing urgency with the need for thorough analysis.
A two-stage causality analysis approach, addressing both the link between incidents and model outputs, and the internal model processes leading to harmful outputs.
Mandatory reporting on interconnected systems affected by incidents, recognizing the wide-reaching impact of GPAI models.
Analysis of potential long-term effects and fundamental rights infringements.
Detailed requirements for final incident reports, including identification, incident details, system information, causality analysis, and corrective actions.
Additional features such as feedback loops with other GPAI obligations, secure reporting environments, and third-party audits.
This framework aims to balance comprehensive reporting against undue burden on model providers, while ensuring that lessons from incidents feed into ongoing improvements in AI safety and governance.
Read the full paper:
For questions, please reach out to our EU AI policy lead Jimmy Farrell at jimmy.farrell@pourdemain.eu.