How should we navigate explosive AI progress?

AI is already accelerating innovation, and may soon become as capable as human scientists.
If that happens, many new technologies could arise in quick succession: new miracle drugs and new bioweapons; automated companies and automated militaries; superhuman prediction and superhuman persuasion.
We are a nonprofit researching what we can do, now, to prepare.

Featured Research

Preparing for the Intelligence Explosion

Fin Moorhouse, Will MacAskill
March 2025
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will force consequential and hard-to-reverse decisions in rapid succession. We call these developments grand challenges.
These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making.
We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring.
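To get a feel for the headline claim, here is a minimal back-of-the-envelope sketch, using illustrative numbers of our own rather than estimates from the paper: once researchers can be copied and run faster than humans, effective research effort per calendar year can exceed a century of human-equivalent work.

    # Toy model of compressed technological progress. Every number below is
    # an illustrative assumption, not an estimate from the paper.
    human_researchers = 1_000_000   # rough scale of today's frontier R&D workforce
    ai_copies = 10_000_000          # assumed population of automated researchers
    serial_speedup = 10             # assumed thinking speed relative to a human
    parallel_discount = 0.5         # crude penalty for duplicated parallel work

    effective_researchers = ai_copies * serial_speedup * parallel_discount
    acceleration = effective_researchers / human_researchers   # ~50x on these numbers

    print(f"Acceleration: {acceleration:.0f}x")
    print(f"Calendar years per century of progress: {100 / acceleration:.1f}")

On these assumptions a century of progress arrives in roughly two years; the point of the toy model is only that modest-looking multipliers compound into extreme compression.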

Should there be just one western AGI project?

Tom Davidson, Rose Hadshar
December 2024
There have been recent discussions of centralizing western AGI development, for instance through a Manhattan Project for AI. But there has been little analysis of whether centralizing would actually be a good idea. In this piece, we explore the strategic implications of having one project instead of several. We think that it’s very unclear whether centralizing would be good or bad overall. We tentatively guess that centralizing would be bad because it would increase risks from power concentration. We argue that future work should focus on increasing the expected quality of either a centralized or multiple projects, rather than increasing the likelihood of a centralized project.

On the Value of Advancing Progress

Toby Ord
July 2024
I show how a standard argument for advancing progress is extremely sensitive to how humanity’s story eventually ends. Whether advancing progress is ultimately good or bad depends crucially on whether it also advances the end of humanity. Because we know so little about the answer to this question, the case for advancing progress is undermined. I suggest we must either overcome this objection by improving our understanding of the connections between progress and human extinction, or switch our focus to advancing certain kinds of progress relative to others: changing where we are going, rather than just how soon we get there.
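The core sensitivity admits a minimal formalisation (our notation, not necessarily the paper's). Let v(t) be the instantaneous value of humanity's trajectory, ending at time T, and suppose an intervention advances progress by \Delta:

    % End date unaffected by progress: trade the start for the tail.
    V_{\text{fixed end}} = \int_0^{T} v(t+\Delta)\,dt = \int_{\Delta}^{T+\Delta} v(t)\,dt

    % End date advances along with progress: the start is simply lost.
    V_{\text{moving end}} = \int_0^{T-\Delta} v(t+\Delta)\,dt = \int_{\Delta}^{T} v(t)\,dt

In the first case the intervention swaps the low-value start of the trajectory for its presumably higher-value tail, which is good whenever v is increasing; in the second it deletes the first \Delta of the trajectory outright, so the sign of the intervention flips with the answer to the extinction question.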

Inference Scaling Reshapes AI Governance

Toby Ord
February 2025
The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab. Rapid scaling of inference-at-deployment would: lower the importance of open-weight models (and of securing the weights of closed models), reduce the impact of the first human-level models, change the business model for frontier AI, reduce the need for power-intensive data centres, and derail the current paradigm of AI governance via training compute thresholds. Rapid scaling of inference-during-training would have more ambiguous effects that range from a revitalisation of pre-training scaling to a form of recursive self-improvement via iterated distillation and amplification.
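As a toy illustration of one of these effects, again with illustrative numbers of our own rather than the paper's: a threshold keyed to training compute can miss a system whose capability, and compute consumption, mostly arrives at inference time.

    # Toy model: inference-at-deployment vs. a training-compute threshold.
    # Every number below is an illustrative assumption, not data from the paper.
    THRESHOLD_FLOP = 1e26          # hypothetical regulatory threshold on training compute

    training_flop = 3e25           # one-off training run: below the threshold
    base_query_flop = 1e15         # assumed cost of a single short answer
    inference_scaling = 1_000      # assumed multiplier from long reasoning chains
    queries_per_year = 1e9         # assumed deployment volume

    annual_inference_flop = base_query_flop * inference_scaling * queries_per_year

    print(f"Caught by the threshold? {training_flop > THRESHOLD_FLOP}")
    print(f"Annual inference compute: {annual_inference_flop:.0e} FLOP, "
          f"{annual_inference_flop / training_flop:.0f}x the training run")

On these assumptions the system falls outside the threshold entirely, even though most of its compute, and much of its capability, is spent after training.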

Team

Tom Davidson
Senior Research Fellow

Will MacAskill
Senior Research Fellow

Rose Hadshar
Research Fellow
Meet the full team

About

We are a small research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems.
AI systems might soon be much more capable than humans, driving a period of rapid technological progress. Even if AI systems were aligned, we might face AI-enabled autocracies, races to grab offworld resources, and conundrums about how to treat digital minds, as well as opportunities to dramatically improve quality of life and collective decision-making.
Right now, we’re thinking about:
  • How AI and technological development might go
  • How to avoid AI-enabled coups
  • Which applications of AI can most help with other challenges
  • What a great post-AGI future might look like
Sign up to our newsletter