Our research

How should we navigate explosive AI progress?

Research

Design sketches: tools for strategic awareness

Owen Cotton-Barratt, Lizka Vaintrob, Oly Sourbut & Rose Hadshar
Part 3 of Design sketches for a more sensible world
Abstract
Near-term AI could be used to power technologies that give individuals and organizations a deeper strategic awareness of the world around them, helping them spot opportunities and avoid pitfalls as they make plans. We think improved strategic awareness could be especially important for empowering humanity to handle the challenges that advanced AI is likely to bring. Here we sketch three technologies that build towards this vision.
Topic
Macrostrategy

Design sketches for a more sensible world

Series
Owen Cotton-Barratt, Lizka Vaintrob, Oly Sourbut & Rose Hadshar
Abstract
We think that near-term AI systems could transform our ability to reason and coordinate, significantly improving our chances of safely navigating the transition to advanced AI systems. This sequence gives a series of design sketches for specific technologies that we think could help. We hope that these sketches make a more sensible world easier to envision, and inspire people to start building the relevant tech.
Topic
Modelling AI progress

Design Sketches: Angels-on-the-Shoulder

Owen Cotton-Barratt, Lizka Vaintrob, Oly Sourbut & Rose Hadshar
Part 2 of Design sketches for a more sensible world
Abstract
Near-term AI could allow us to build many technological analogues to ‘angels-on-the-shoulder’: highly customized tools that help people to better navigate their environments or handle tricky situations in ways they’ll feel good about later. These could mean more endorsed decisions, and fewer unforced errors. Here we sketch five technologies that build towards this vision.
Topic
Macrostrategy

The International AGI Project Series

Series
William MacAskill
Abstract
This is a series of papers and research notes on the idea that AGI should be developed as part of an international collaboration between governments. We aim to (i) assess how desirable an international AGI project is; and (ii) assess what the best version of an international AGI project (taking feasibility into account) would look like.
Topic
International governance

What an international project to develop AGI should look like

William MacAskill
Part 1 of The international AGI project series
Abstract
What would the best version of an international project to develop AGI look like? In this research note, I set out my tentative best guess: “Intelsat for AGI”. This would be a US-led international project modelled on Intelsat (an international project that set up the first global communications satellite network), with broad benefit sharing for non-members. The primary case for it is that, within the domain of international AGI projects, it looks unusually feasible, and yet it would significantly reduce catastrophic risk compared to a US-only project.
Topic
International governance

AGI and World Government

William MacAskill & Rose Hadshar
Part 3 of The international AGI project series
Abstract
If there’s a large enough intelligence explosion, the first project to build AGI could organically become a de facto world government. In this note, we consider what implications this possibility has for AGI governance. We argue that this scenario makes it more desirable that AGI be developed by a multilateral coalition of democratic governments, under explicitly interim governance arrangements, and that non-participating countries receive major benefits and credible reassurances around their sovereignty.
Topics
International governance & Threat modelling

International AI projects and differential AI development

William MacAskill
Part 4 of The international AGI project series
Abstract
Proposals for an international AI project to manage risks from advanced AI generally require all frontier AI development to happen within that project, and subject to limitations. But some AI capabilities actively help with addressing risk. I argue that international projects should aim to limit only the most dangerous AI capabilities (in particular, AI R&D capabilities), while promoting helpful capabilities like forecasting and ethical deliberation. If this is technically feasible (which I’m uncertain about), it could increase our capacity to handle risk, reduce incentives to race, and help get industry on board with an international project.
Topics
International governance & Differential AI acceleration

A global convention to govern the intelligence explosion

William MacAskill
Part 5 of The international AGI project series
Abstract
There currently isn’t a plan for how society should navigate an intelligence explosion. This research note proposes an international convention triggered when AI crosses defined capability thresholds. At that point, the US would pause frontier AI development for one month and convene other nations to draft treaties to govern the many challenges an intelligence explosion would throw up. The approach is potentially feasible if agreements can be reached quickly enough, but it’s unclear whether enforcement and the technical details would work in practice.
Topic
International governance

An overview of some international organisations, with their voting structures

Rose Hadshar
Part 6 of The international AGI project series
Abstract
This rough research note gives an overview of some international organisations and their voting structures, as background for thinking about the international governance of AGI.
Topic
International governance

The UN Charter: a case study in international governance

Research note
Part 7 of The international AGI project series
Abstract
The transition to advanced AI systems may eventually lead to some kind of international agreement to govern AI. An important historical case study for an agreement of this kind is the founding of the United Nations. This research note gives an overview of the creation of the UN charter, before drawing some tentative conclusions for international AGI governance.
Topic
International governance

Short Timelines Aren't Obviously Higher-Leverage

William MacAskill & Mia Taylor
Abstract
Should we focus our efforts on worlds with short timelines to AGI? People often argue we should, because such worlds are higher-leverage than longer-timelines worlds. We disagree. In this research note, we argue that it’s at least unclear whether shorter timelines are higher-leverage, and that, for many people, medium-length timelines will be higher-leverage than short timelines.
Topic
Macrostrategy

Is Flourishing Predetermined?

Fin Moorhouse & Carlo Leonardo Attubato
Abstract
Our overall credence in the chance that humanity flourishes might reflect some credence that flourishing is almost impossible, plus some credence that it’s very easy. If so, flourishing would seem overdetermined, and hence less tractable to work on than we thought. We consider how to formalise this argument.
Topic
Macrostrategy