How should we navigate explosive AI progress?

AI is already accelerating innovation, and may soon become as capable as human scientists.
If that happens, many new technologies could arise in quick succession: new miracle drugs and new bioweapons; automated companies and automated militaries; superhuman prediction and superhuman persuasion.
We are a nonprofit researching what we can do, now, to prepare.

Featured Research

AI-Enabled Coups: How a Small Group Could Use AI to Seize Power

Tom Davidson, Lukas Finnveden, Rose Hadshar
April 2025
The development of AI that is more broadly capable than humans will create a new and serious threat: AI-enabled coups. An AI-enabled coup could be staged by a very small group, or just a single person, and could occur even in established democracies. Sufficiently advanced AI will introduce three novel dynamics that significantly increase coup risk. Firstly, military and government leaders could fully replace human personnel with AI systems that are singularly loyal to them, eliminating the need to gain human supporters for a coup. Secondly, leaders of AI projects could deliberately build AI systems that are secretly loyal to them, for example fully autonomous military robots that pass security tests but later execute a coup when deployed in military settings. Thirdly, senior officials within AI projects or the government could gain exclusive access to superhuman capabilities in weapons development, strategic planning, persuasion, and cyber offense, and use these to increase their power until they can stage a coup. To address these risks, AI projects should design and enforce rules against AI misuse, audit systems for secret loyalties, and share frontier AI systems with multiple stakeholders. Governments should establish principles for government use of advanced AI, increase oversight of frontier AI projects, and procure AI for critical systems from multiple independent providers.

Preparing for the Intelligence Explosion

Will MacAskill, Fin Moorhouse
March 2025
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will force consequential and hard-to-reverse decisions in rapid succession. We call these developments grand challenges.
These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making.
We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring.

Will AI R&D Automation Cause a Software Intelligence Explosion?

Daniel Eth, Tom Davidson
March 2025
AI companies are increasingly using AI systems to accelerate AI research and development. Today’s AI systems help researchers write code, analyze research papers, and generate training data. Future systems could be significantly more capable – potentially automating the entire AI development cycle from formulating research questions and designing experiments to implementing, testing, and refining new AI systems. We argue that such systems could trigger a runaway feedback loop in which they quickly develop more advanced AI, which itself speeds up the development of even more advanced AI, resulting in extremely fast AI progress, even without the need for additional computer chips. Empirical evidence on the rate at which AI research efforts improve AI algorithms suggests that this positive feedback loop could overcome diminishing returns to continued AI research efforts. We evaluate two additional bottlenecks to rapid progress: training AI systems from scratch takes months, and improving AI algorithms often requires computationally expensive experiments. However, we find that there are possible workarounds that could enable a runaway feedback loop nonetheless.
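
To see why the strength of the feedback loop matters, here is a minimal toy simulation (our illustration with a made-up parameter r, not the paper's empirical model): capability grows at a rate proportional to capability raised to the power r. When r is above 1, automated research outpaces diminishing returns and progress runs away; below 1, progress slows over time.

```python
# A deliberately minimal sketch (our illustration, not the paper's model).
# Software capability `a` improves at a rate proportional to a**r: better
# software means faster automated researchers, which means faster further
# improvement. The exponent r (hypothetical here) nets that acceleration
# against diminishing returns to algorithmic progress.

def simulate(r, dt=0.05, max_steps=200, cap=1e6):
    """Euler-integrate da/dt = a**r from a = 1, stopping at `cap`."""
    a, t = 1.0, 0.0
    for _ in range(max_steps):
        if a >= cap:
            break  # reached the cap: "runaway" progress in this toy model
        a += dt * a ** r
        t += dt
    return a, t

for r in (0.5, 1.0, 1.5):
    a, t = simulate(r)
    print(f"r = {r}: capability {a:,.0f} at t = {t:.2f}")
```

With r = 1.5 the simulation blows through the cap early on, while r = 0.5 crawls; the paper's empirical question is, in effect, which side of 1 the real-world exponent falls on.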

AI Tools for Existential Security

Lizka Vaintrob, Owen Cotton-Barratt
March 2025
Humanity is not prepared for the AI-driven challenges we face. But the right AI tools could help us to anticipate and work together to meet these challenges — if they’re available in time. We can and should accelerate these tools.
Key applications include (1) epistemic tools, which improve human judgement; (2) coordination tools, which help diverse groups identify and work towards shared goals; and (3) risk-targeted tools, which address specific challenges.
We can accelerate important tools by investing in task-relevant data, lowering adoption barriers, and securing compute for key R&D. While background AI progress limits potential gains, even small speedups could be decisive.
This is a priority area. There is lots to do already, and there will quickly be more. We should get started, and we should plan for a world with abundant cognition.

Intelsat as a Model for International AGI Governance

Will MacAskill, Rose Hadshar
March 2025
If there is an international project to build artificial general intelligence (“AGI”), how should it be designed? Existing scholarship has looked to historical models for inspiration, often suggesting the Manhattan Project or CERN as the closest analogues. But AGI is a fundamentally general-purpose technology, and is likely to be used primarily for commercial purposes rather than military or scientific ones.
This report presents an under-discussed alternative: Intelsat, an international organization founded to establish and own the global satellite communications system. We show that Intelsat is a proof of concept that a multilateral project to build a commercially and strategically important technology is possible and can achieve its intended objectives, providing major benefits to both the US and its allies compared to the US acting alone. We conclude that ‘Intelsat for AGI’ is a valuable complement to existing models of AGI governance.
More research

Key questions

What are the new technologies and challenges that AI could unlock? Which will come first?
What can companies and governments do to avoid extreme power concentrations?
Which beneficial applications of AI should be accelerated? How can we do that?
How do we reach really good futures (rather than “just” avoiding catastrophe)?

Team

Tom Davidson
Senior Research Fellow

Will MacAskill
Senior Research Fellow

Rose Hadshar
Research Fellow
Meet the full team


About

We are a small research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems.
AI systems might soon be much more capable than humans, driving rapid technological progress. Even if AI systems were aligned, we might face AI-enabled autocracies, races to grab offworld resources, and conundrums about how to treat digital minds, as well as opportunities to dramatically improve quality of life and collective decision-making.
Right now, we are thinking about:
  • How AI and technological development might go
  • How to avoid AI-enabled coups
  • Which applications of AI can most help with other challenges
  • What a great post-AGI future might look like