Our research

How should we navigate explosive AI progress?



Research


Design Sketches: Collective Epistemics

Owen Cotton-Barratt, Lizka Vaintrob & Oly Sourbut
Abstract
Near-term AI could power applications that correct misinformation close to the source, let people track who or what has been reliable in the past, and identify manipulative rhetoric. Getting this right could help to incentivize honest behaviour across society — increasing our collective capacity to address new challenges competently. Here we sketch five technologies that build towards this vision.
Authors
Owen Cotton-Barratt, Lizka Vaintrob & Oly Sourbut
Topic
Differential AI acceleration

The first type of transformative AI?

Owen Cotton-Barratt, Lizka Vaintrob & Oly Sourbut
Abstract
AI will transform the world in many ways. It isn’t enough to pay attention only to the biggest transformations — earlier transformations might reshape the landscape, and this has big implications for how we should prepare. Some of the best interventions available to us might be trying to improve how these early transformations play out. Understanding the sequencing seems at least as decision-relevant as understanding timelines, but has so far received much less attention. Addressing that should be a priority.
Authors
Owen Cotton-Barratt, Lizka Vaintrob & Oly Sourbut
Topics
Modelling AI progress, Differential AI acceleration & Macrostrategy

Space Debris and Launch Denial

Fin Moorhouse
Abstract
We could see runaway growth in the amount of debris in orbit — so-called 'Kessler syndrome'. That could happen accidentally, but how much worse could things get with deliberate attempts to generate space debris? Could the resulting debris even block launches through orbit, potentially letting a first-mover lock others out of space? I find that even deliberate attempts would struggle to block launches, though they could cheaply make preferential orbits inhospitable. Shielding and the brevity of transit time are the key factors.
Author
Fin Moorhouse
Topic
Macrostrategy

Evaluating the Infinite

Toby Ord
Abstract
I present a novel mathematical technique for dealing with the infinities arising from divergent sums and integrals. It assigns them fine-grained infinite values from the set of hyperreal numbers in a manner that refines the standard theories of summation and integration. This has implications in statistics (helping us work with distributions whose mean or variance is infinite), decision theory (allowing comparison of options with infinite expected values), economics (allowing evaluation of infinitely long streams of utility without discounting), and ethics (allowing evaluation of infinite worlds). There are even implications for finite cases, as the ability to handle these infinities undermines a common argument for bounded utility and the discounting of future utility.
Author
Toby Ord
Topic
Macrostrategy

Could one country outgrow the rest of the world?

Tom Davidson
Abstract
When countries grow at the same exponential rate, they maintain their relative sizes. But after we develop AGI, there may be a period of superexponential growth, with growth becoming faster and faster over time. If this superexponential growth lasts long enough, the leader could pull further and further ahead of the others, eventually producing >99% of global output and outgrowing the rest of the world combined. This post gives a basic economic analysis of this dynamic and argues that the leading country in AI development could outgrow the world, but only if it were trying hard to do so.
Author
Tom Davidson
Topic
Threat modelling

How quick and big would a software intelligence explosion be?

Tom Davidson & Tom Houlden
Abstract
In previous work, we’ve argued that AI that can automate AI R&D could lead to a software intelligence explosion. But just how dramatic would this actually be? In this paper, we model how much AI progress we’ll see before a software intelligence explosion fizzles out. Averaged over one year, we find that AI progress could easily be 3X faster, might be 10X faster, but won’t be 30X faster — because at that speed we’d quickly hit limits on how good software can get.
Authors
Tom Davidson & Tom Houlden
Topic
Modelling AI progress

Better Futures

Series
William MacAskill
Abstract
Suppose we want the future to go better. What should we do?
One approach is to avoid near-term catastrophes, like human extinction. This essay series explores a different, complementary, approach: improving on futures where we survive, to achieve a truly great future.
Author
William MacAskill
Topic
Macrostrategy

Introducing Better Futures

William MacAskill
Part 1 of Better Futures
Abstract
Suppose we want the future to go better. What should we do?
One approach is to avoid near-term catastrophes, like human extinction. This essay series explores a different, complementary, approach: improving on futures where we survive, to achieve a truly great future.
Author
William MacAskill
Topic
Macrostrategy

No Easy Eutopia

Fin Moorhouse & William MacAskill
Part 2 of Better Futures
Abstract
How big is the target we need to hit to reach a mostly great future? We argue that, on most plausible views, only a narrow range of futures meet this bar, and even common-sense utopias miss out on almost all their potential.
Authors
Fin Moorhouse & William MacAskill
Topic
Macrostrategy

Convergence and Compromise

Fin Moorhouse & William MacAskill
Part 3 of Better Futures
Abstract
Even if the target is narrow, will there be forces which nonetheless home in on near-best futures? We argue society is unlikely to converge on them by default. Trade and compromise make eutopias seem more achievable, but still we should expect ‘default’ outcomes to fall far short.
Authors
Fin Moorhouse & William MacAskill
Topic
Macrostrategy

How to Make the Future Better

William MacAskill
Part 5 of Better Futures
Abstract
I suggest a number of concrete actions we can take now to make the future go better.
Author
William MacAskill
Topic
Macrostrategy

Persistent Path-Dependence

William MacAskill
Part 4 of Better Futures
Abstract
Over sufficiently long time horizons, will the effects of actions to improve the quality of the future just ‘wash out’? Against this view, I argue that a number of plausible near-term events will have persistent and predictable path-dependent effects on the value of the future.
Author
William MacAskill
Topic
Macrostrategy
