Our research
How should we navigate explosive AI progress?
Featured
Preparing for the Intelligence Explosion
Will MacAskill, Fin Moorhouse
March 2025
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will force consequential and hard-to-reverse decisions in rapid succession. We call these developments grand challenges.
These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making.
We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring.
Will AI R&D Automation Cause a Software Intelligence Explosion?
Daniel Eth, Tom Davidson
March 2025
AI companies are increasingly using AI systems to accelerate AI research and development. Today’s AI systems help researchers write code, analyze research papers, and generate training data. Future systems could be significantly more capable – potentially automating the entire AI development cycle from formulating research questions and designing experiments to implementing, testing, and refining new AI systems. We argue that such systems could trigger a runaway feedback loop in which they quickly develop more advanced AI, which itself speeds up the development of even more advanced AI, resulting in extremely fast AI progress, even without the need for additional computer chips. Empirical evidence on the rate at which AI research efforts improve AI algorithms suggests that this positive feedback loop could overcome diminishing returns to continued AI research efforts. We evaluate two additional bottlenecks to rapid progress: training AI systems from scratch takes months, and improving AI algorithms often requires computationally expensive experiments. However, we find that there are possible workarounds that could enable a runaway feedback loop nonetheless.
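The core dynamic here — a feedback loop that either overcomes or succumbs to diminishing returns — can be illustrated with a toy simulation. This sketch is not from the paper: the single parameter `r` (how strongly research output scales with the current software level) and the one-line dynamics are simplifying assumptions for illustration only. When `r` exceeds 1, the proportional rate of progress keeps rising (a runaway loop); when `r` falls below 1, diminishing returns win and proportional progress slows.

```python
# Toy model (illustrative only, not the paper's model): software level L
# improves at a rate that scales as L**r, because better AI software also
# boosts the research effort improving it. The exponent r is a stand-in
# for "returns to AI research effort".

def simulate(r, steps=200, dt=0.01):
    """Return the proportional growth rate of software level at each step."""
    level = 1.0
    prop_rates = []
    for _ in range(steps):
        rate = level ** r              # research output at current level
        prop_rates.append(rate / level)  # proportional growth = level**(r-1)
        level += dt * rate             # software improves
    return prop_rates

accel = simulate(r=1.2)  # r > 1: proportional progress keeps accelerating
stall = simulate(r=0.8)  # r < 1: diminishing returns dominate over time
```

Under these assumptions, `accel` is monotonically increasing and `stall` monotonically decreasing, mirroring the paper's framing: the empirical question is which side of the threshold real-world returns to AI research fall on.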
Three Types of Intelligence Explosion
Tom Davidson, Rose Hadshar, Will MacAskill
March 2025
Once AI systems can design and build even more capable AI systems, we could see an intelligence explosion, where AI capabilities rapidly increase to well past human performance.
The classic intelligence explosion scenario involves a feedback loop where AI improves AI software. But AI could also improve other inputs to AI development. This paper analyses three feedback loops in AI development: software, chip technology, and chip production. These could drive three types of intelligence explosion: a software intelligence explosion driven by software improvements alone; an AI-technology intelligence explosion driven by both software and chip technology improvements; and a full-stack intelligence explosion incorporating all three feedback loops.
Even if a software intelligence explosion never materializes or plateaus quickly, AI-technology and full-stack intelligence explosions remain possible. And, while these would start more gradually, they could accelerate to very fast rates of development. Our analysis suggests that each feedback loop by itself could drive accelerating AI progress, with effective compute potentially increasing by 20-30 orders of magnitude before hitting physical limits—enabling truly dramatic improvements in AI capabilities. The type of intelligence explosion also has implications for the distribution of power: a software intelligence explosion would by default concentrate power within one country or company, while a full-stack intelligence explosion would be spread across many countries and industries.
Intelsat as a Model for International AGI Governance
Will MacAskill, Rose Hadshar
March 2025
If there is an international project to build artificial general intelligence (“AGI”), how should it be designed? Existing scholarship has looked to historical models for inspiration, often suggesting the Manhattan Project or CERN as the closest analogues. But AGI is a fundamentally general-purpose technology, and is likely to be used primarily for commercial purposes rather than military or scientific ones.
This report presents an under-discussed alternative: Intelsat, an international organization founded to establish and own the global satellite communications system. We show that Intelsat is proof of concept that a multilateral project to build a commercially and strategically important technology is possible and can achieve intended objectives—providing major benefits to both the US and its allies compared to the US acting alone. We conclude that ‘Intelsat for AGI’ is a valuable complement to existing models of AGI governance.
AI Tools for Existential Security
Lizka Vaintrob, Owen Cotton-Barratt
March 2025
Humanity is not prepared for the AI-driven challenges we face. But the right AI tools could help us to anticipate and work together to meet these challenges — if they’re available in time. We can and should accelerate these tools.
Key applications include (1) epistemic tools, which improve human judgement; (2) coordination tools, which help diverse groups identify and work towards shared goals; and (3) risk-targeted tools, which address specific challenges.
We can accelerate important tools by investing in task-relevant data, lowering adoption barriers, and securing compute for key R&D. While background AI progress limits potential gains, even small speedups could be decisive.
This is a priority area. There is lots to do already, and there will quickly be more. We should get started, and we should plan for a world with abundant cognition.
Research
The AI Adoption Gap: Preparing the US Government for Advanced AI
Differential AI acceleration
Will the Need to Retrain AI Models from Scratch Block a Software Intelligence Explosion?
Modelling AI progress
Will AI R&D Automation Cause a Software Intelligence Explosion?
Modelling AI progress
How Can AI Labs Incorporate Risks From AI Accelerating AI Progress Into Their Responsible Scaling Policies?
Corporate Governance
Once AI Research is Automated, Will AI Progress Accelerate?
Modelling AI progress
Three Types of Intelligence Explosion
Modelling AI progress
How Far Can AI Progress Before Hitting Effective Physical Limits?
Modelling AI progress
How Suddenly Will AI Accelerate the Pace of AI Progress?
Modelling AI progress
AI Tools for Existential Security
Differential AI acceleration
Intelsat as a Model for International AGI Governance
International governance
Preparing for the Intelligence Explosion
Modelling AI progress
Could Advanced AI Accelerate the Pace of AI Progress? Interviews with AI Researchers
Modelling AI progress