Responding to Serious AI Incidents: Policymaker Briefing
- jimmyfarrell9
- Oct 6
Updated: Oct 17
This briefing provides an overview of AI incidents in the context of the newly finalised Code of Practice, a compliance tool under the AI Act for general-purpose AI models. It covers a selection of past and ongoing incidents, the next frontier of incidents, and a list of possible actions for policymakers, divided into immediate, medium-term, and long-term time horizons.
Featured in Euractiv: https://www.euractiv.com/news/ai-rules-need-beefing-up-to-ensure-they-catch-crises-thinktank-warns/

Executive Summary
Context
AI capabilities are advancing at unprecedented speed. In just two years, general-purpose AI (GPAI) models have evolved from generating text and images to complex reasoning, multimodal content, and agentic behaviour. These advances bring significant risks: bioweapons design assistance, cybersecurity threats, and harmful outputs without safety guardrails. The AI Act’s Code of Practice aims to mitigate risks through, for example, external assessments and incident reporting, but leaves major gaps:
No pre-deployment safety reporting
Providers set their own risk thresholds
No GPAI-specific whistleblower protections
No incident response readiness requirements
Policymakers must be ready to enact rapid updates, not wait for the two-year review cycle.
Why the urgency?
Incidents risk physical harm, loss of life, economic disruption, and, importantly, erosion of public trust. Europe lags behind the US and China in AI adoption, and trust is essential for catching up. Historical precedents (e.g., Fukushima for nuclear energy and the FTX collapse for crypto-assets) show how a single crisis can derail an entire industry. Preventing incidents is thus central to Europe’s competitiveness and credibility as an “AI Continent.”
As of September 2025, serious incidents have already been documented, including the following (the full briefing provides significantly more detail):
Health & deaths: suicides linked to AI chatbots and AI-induced psychosis
Fundamental rights breaches: non-consensual AI avatars of victims, mass data leaks, discriminatory content
Property/financial harms: deepfake-induced stock market destabilisations, multimillion-dollar fraud, accidental database deletions
Frontier models are now approaching, and in some cases likely reaching, dangerous capability thresholds in cybersecurity and biological weapons development. Simultaneously, emerging evidence suggests that loss-of-control risks threaten to increase the likelihood of serious incidents across all risk categories.
How will policymakers become aware of incidents?
The Code mandates that providers report serious incidents to the AI Office, but it requires no public disclosure. Without transparency, policymakers, researchers, and civil society must rely on media coverage or public databases, which often lack root-cause details. This limits learning and prevention.
Incidents may also go unreported: providers may deny involvement, incidents may stem from state actors or internal misuse, or they may be “near misses.” Hazards (events that could plausibly lead to incidents) are likewise not covered.
What can policymakers do in the aftermath of an incident?
Immediate (within 1 month):
Transparent public communication and engagement with affected stakeholders
Coordination with international partners (e.g. the UK AISI and US CAISI)
Medium-term (2–6 months):
Emergency updates to the Code of Practice
Use of delegated acts to adjust GPAI obligations and thresholds
Input from the Scientific Panel, Advisory Forum, AI Board, and the Parliament Working Group on the AI Act
Long-term (beyond 6 months):
Expand the AI Office’s resources (via the EU budget, the MFF, and the 2028 AI Office evaluation)
Increase funding for frontier AI safety research
Prepare for the 2029 revision of GPAI obligations in the AI Act
AI incidents are no longer hypothetical: they are real, increasing in number, and escalating in severity. The Code of Practice is an important start but will likely prove insufficient. Policymakers must be prepared to act rapidly with updates, oversight, and communication to protect citizens, build trust, and secure Europe’s place in global AI leadership.
Read the full paper: