Our research

How should we navigate explosive AI progress?

Stay up to date with our research

Enter your email address to subscribe to our newsletter about transformative AI research
Stay up to speed with the latest research on preparing for transformative AI. Roughly weekly.
By subscribing you agree to Substack's terms of service. Unsubscribe anytime. Archives.

Listen

Listen on your favorite platform and never miss an episode.
Spotify
Apple Podcasts
YouTube
Podcast Addict
Pocket Casts
Overcast
RSS
Amazon Music
Pinecast

Research

Filter
(54)

The International AGI Project Series

Series
William MacAskill
Abstract
This is a series of papers and research notes on the idea that AGI should be developed as part of an international collaboration between governments. We aim to (i) assess how desirable an international AGI project is; (ii) assess what the best version of an international AGI project (taking feasibility into account) would look like.
Author
William MacAskill
Topic
International governance

What an international project to develop AGI should look like

William MacAskill
Part 1 of The international AGI project series
Abstract
What would the best version of an international project to develop AGI look like? In this research note, I set out my tentative best guess: “Intelsat for AGI”. This would be a US-led international project modelled on Intelsat (an international project that set up the first global communications satellite network), with broad benefit sharing for non-members. The primary case is that, within the domain of international AGI projects, this looks unusually feasible, and yet it would significantly reduce catastrophic risk compared to a US-only project.
Author
William MacAskill
Topic
International governance

AGI and World Government

William MacAskill & Rose Hadshar
Part 3 of The international AGI project series
Abstract
If there’s a large enough intelligence explosion, the first project to build AGI could organically become a de facto world government. In this note, we consider what implications this possibility has for AGI governance. We argue that this scenario makes it more desirable that AGI be developed by a multilateral coalition of democratic governments, under explicitly interim governance arrangements, and that non-participating countries receive major benefits and credible reassurances around their sovereignty.
Authors
William MacAskill & Rose Hadshar
Topics
International governance & Threat modelling

International AI projects and differential AI development

William MacAskill
Part 4 of The international AGI project series
Abstract
Proposals for an international AI project to manage risks from advanced AI generally require all frontier AI development to happen within that project, and with limitations. But some AI capabilities actively help with addressing risk. I argue that international projects should aim to limit only the most dangerous AI capabilities (in particular, AI R&D capabilities), while promoting helpful capabilities like forecasting and ethical deliberation. If this is technically feasible (which I’m uncertain about), it could increase our capacity to handle risk, reduce incentives to race, and help get industry on board with an international project.
Author
William MacAskill
Topics
International governance & Differential AI acceleration

A global convention to govern the intelligence explosion

William MacAskill
Part 5 of The international AGI project series
Abstract
There currently isn't a plan for how society should navigate an intelligence explosion. This research note proposes an international convention triggered when AI crosses defined capability thresholds. At that point, the US would pause frontier AI development for one month and convene other nations to draft treaties to govern the many challenges an intelligence explosion would throw up. While potentially feasible if agreements can be made quickly enough, it’s unclear if enforcement and technical details would work in practice.
Author
William MacAskill
Topic
International governance

An overview of some international organisations, with their voting structures

Rose Hadshar
Part 6 of The international AGI project series
Abstract
This rough research note gives an overview of some international organisations and their voting structures, as background for thinking about the international governance of AGI.
Author
Rose Hadshar
Topic
International governance

The UN Charter: a case study in international governance

Research note
Part 7 of The international AGI project series
Abstract
The transition to advanced AI systems may eventually lead to some kind of international agreement to govern AI. An important historical case study for an agreement of this kind is the founding of the United Nations. This research note gives an overview of the creation of the UN charter, before drawing some tentative conclusions for international AGI governance.
Topic
International governance

Short Timelines Aren't Obviously Higher-Leverage

William MacAskill & Mia Taylor
Abstract
Should we focus our efforts on worlds with short timelines to AGI? People often argue we should, because such worlds are higher-leverage than longer timelines worlds. We disagree. In this research note, we argue that it’s at least unclear that shorter timelines are higher-leverage, and, for many people, medium length timelines will be higher-leverage than short timelines.
Authors
William MacAskill & Mia Taylor
Topic
Macrostrategy

Is Flourishing Predetermined?

Fin Moorhouse & Carlo Leonardo Attubato
Abstract
Our overall credence in the chance that humanity flourishes might reflect some credence that flourishing is almost impossible, plus some credence that it’s very easy. If so, flourishing would seem overdetermined, and hence less tractable to work on than we thought. We consider how to formalise this argument.
Authors
Fin Moorhouse & Carlo Leonardo Attubato
Topic
Macrostrategy

Beyond Existential Risk

William MacAskill & Guive Assadi
Abstract
Bostrom's Maxipok principle suggests reducing existential risk should be the overwhelming focus for improving humanity’s long-term prospects. But this rests on an implicitly dichotomous view of future value, where most outcomes are either near-worthless or near-best. Against Maxipok, we argue it is possible to substantially influence the long-term future by other channels than reducing existential risk — including how values, institutions, and power distributions become locked in.
Authors
William MacAskill & Guive Assadi
Topic
Macrostrategy

ML research directions for preventing catastrophic data poisoning

Tom Davidson
Abstract
Previously, Forethought and others have highlighted the risk that a malicious actor could use data poisoning to instill secret loyalties into advanced AI systems, which then help the malicious actor to seize power.
This piece gives my view on what ML research could prevent this from happening.
Author
Tom Davidson
Topic
Threat modelling

Viatopia

William MacAskill
Abstract
What kind of outcome should we be aiming for after the transition to superintelligence? History shows we should be suspicious of any particular utopian vision. But protopianism—just solving problems as they arise—also falls short. I make the case for ‘viatopia’: an intermediate state of society that is on track for a near-best future, whatever that might look like.
Author
William MacAskill
Topic
Macrostrategy
Show all

Search

Search for articles...