Here is a selection of some of my recent publications.

Looking ahead: Synergies between the EU AI Office and UK AISI
Renan Araujo

The UK AI Security Institute (AISI) and the European AI Office are the primary bodies covering security and safety of AI systems in their respective jurisdictions. The institutions have overlapping mandates and share various functions. Using a framework of four levels of engagement – collaboration, coordination, communication and separation – this brief provides an overview of potential synergies and strategic alignment, summarized in a table of ideas (Table 1). This framework can further provide a model for other regional strategic arrangements within the broader network of AISIs.

Who Should Develop Which AI Evaluations?
Renan Araujo

We explore frameworks and criteria for determining which actors (e.g., government agencies, AI companies, third-party organisations) are best suited to develop AI model evaluations. Key challenges include conflicts of interest when AI companies assess their own models, the information and skill requirements for AI evaluations and the (sometimes) blurred boundary between developing and conducting evaluations. We propose a taxonomy of four development approaches: government-led development, government-contractor collaborations, third-party development via grants, and direct AI company development.

Key Questions for the International Network of AI Safety Institutes
Renan Araujo

In this commentary, we explore key questions for the International Network of AI Safety Institutes and suggest ways forward given the upcoming San Francisco convening on November 20-21, 2024. What should the network work on? How should it be structured in terms of membership and central coordination? How should it fit into the international governance landscape?

Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges
Renan Araujo

AI Safety Institutes (AISIs) are a new institutional model for AI governance that has expanded across the globe. In this primer, we analyze the “first wave” of AISIs: the shared fundamental characteristics and functions of the institutions established by the UK, the US, and Japan, which are governmental and technical, with a clear mandate to govern the safety of advanced AI systems.

The Future of International Scientific Assessments of AI’s Risks
Renan Araujo

Effective international coordination to address AI’s global impacts demands a shared, scientifically rigorous understanding of AI risks. This paper examines the challenges and opportunities in establishing international scientific consensus in this domain. It analyzes current efforts, including the UK-led International Scientific Report on the Safety of Advanced AI and emerging UN initiatives, identifying key limitations and tradeoffs. The authors propose a two-track approach: 1) a UN-led process focusing on broad AI issues and engaging member states, and 2) an independent annual report specifically focused on advanced AI risks. The paper recommends careful coordination between these efforts to leverage their respective strengths while maintaining their independence. It also evaluates potential hosts for the independent report, including the network of AI Safety Institutes, the OECD, and scientific organizations like the International Science Council. The proposed framework aims to balance scientific rigor, political legitimacy, and timely action to facilitate coordinated international action on AI risks.
