France's AI Action Summit: Key recommendations

I will be in Paris between Feb 3-13 for events around the AI Action Summit. I’d love to meet others who are in town to chat about AI policy.

What does success look like for the Summit?

As excellently articulated by GovAI researchers Claire Dennis, Ben Clifford, Markus Anderljung, and Robert Trager:

  1. Ensure the Summit Series’ future: it’s still uncertain which country will host the next AI Summit and when. The Summits have been an essential part of the international conversation on securing advanced AI, contributing to intergovernmental dialogues (e.g., laying the foundation for the international network of AI Safety Institutes) and company commitments on risk management (e.g., safety frameworks). Securing the Summits’ continuation within the next 6-9 months is, in my view, the most important outcome Paris could deliver.

  2. Secure senior US and Chinese government engagement: the US has been a global bellwether on securing AI systems, so senior representation from its AISI or another key agency would be essential. Chinese representation would be equally important, especially given that China did not join the Seoul Declaration – the Summit is one of the few venues where US and Chinese AI representatives both take part in building a global dialogue on the safety of advanced AI systems.

  3. Demonstrate continued progress on safety commitments, especially from companies: previous Summits saw important updates from companies on the measures they committed to take to ensure the safety of their systems. Further updates from frontier AI labs on these commitments would represent meaningful progress.

Side events

Some events I’m excited to attend:

🟧 IASEAI (International Association for Safe and Ethical AI)’s conference, where I’ll share IAPS’s work on AI Safety Institutes.

🔐 AI Security Forum: excited to chat with experts on AI model weight security, cyber capability evaluations, and threat model tracking.

🌐 AI Safety Connect: keen to discuss coordination among AI Safety Institutes, international cooperation, and see the launch of the Global Risk and AI Safety Preparedness project.

You can see more side events here: https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia/side-events-at-the-summit 
