
Artificial Intelligence (AI)


The rapid advancement of AI presents various opportunities, but also significant risks.

Deepfakes and the proliferation of misinformation could severely erode public trust and disrupt democratic processes. Additionally, failures in AI systems deployed in critical infrastructure, such as energy grids, could have serious consequences.

AI safety, therefore, must be a key priority. Smart oversight and standards are crucial to harnessing the opportunities while also managing the risks of AI.

Our focus areas on AI safety include:


Propose smart regulatory frameworks for advanced AI

A regulatory framework for AI is vital to balance innovation with effective risk management. By establishing reasonable guidelines, such a framework fosters a safe development environment, especially for cutting-edge models. A key element is mandating third-party evaluations of advanced AI models, ensuring comprehensive testing before market introduction. Unlike everyday products such as toasters, which must meet established safety standards before sale, AI regulation in Europe and globally is still in its infancy. A robust regulatory framework reduces uncertainty, encourages investment, and helps ensure that AI technologies benefit society while proactively mitigating potential harms.




Shape industry standards

Complementing legal frameworks, the development of detailed industry standards, such as those by ISO and CEN/CENELEC, plays a crucial role in ensuring AI's safe and ethical deployment. These standards provide a bottom-up approach to governance, offering specific technical guidelines that can adapt more swiftly to technological advancements than legislation. Pour Demain actively participates in these technical discussions, contributing expertise to shape standards that allow for safe and ethical innovation.


Contribute to international AI governance

International governance of AI is pivotal. Initiatives such as the Council of Europe's AI Convention and policy recommendations by the G7 and OECD play significant roles in shaping global standards. These efforts are complemented by ongoing work in the follow-up summits to the 2023 UK AI Safety Summit, where key stakeholders convene to advance AI safety and governance frameworks. Pour Demain actively contributes to these international efforts, leveraging its expertise to inform policy recommendations and standards.


Strengthen research on AI safety

AI systems, especially deep learning models, are often opaque, complicating the understanding of their decision-making processes. This opacity is problematic in critical areas like healthcare and justice. Therefore, enhancing AI interpretability through research to make AI decisions transparent, accountable, and reliable is essential. Such efforts are key to ensuring AI's ethical use and societal trust. Pour Demain advocates for both international and national research initiatives focused on AI interpretability.


Monitor AI use in critical infrastructure

Pour Demain is surveying the use of advanced AI in critical sectors such as health and energy to inform policymakers (e.g., its 2023 survey). This effort aims to map AI adoption and guide the policy actions needed for its safe deployment. Such data-driven insights help ensure that AI enhances these sectors responsibly, aligning technological progress with societal well-being.


Host AI Safety Prize

Pour Demain has organised an AI Safety Prize to highlight AI safety concerns to policymakers and business leaders. This competition aims to uncover safety loopholes, raising awareness and prompting action on AI safety. It serves as a platform for driving safer AI development, influencing policy and industry practices to prioritise safety and ethics in AI advancements.

Latest blog posts on AI

Stay up to date with our work.

Subscribe to our newsletter

