When people hear “climate change,” “AI risk,” or “existential threats,” they usually think short-term.
Election cycles. Quarterly profits. This decade. But in What We Owe the Future, William MacAskill makes a radical argument: The most important people in the world are the ones who haven’t been born yet.
This isn’t science fiction. It’s moral philosophy, backed by data, history, and hard reasoning. And whether you agree with him or not, this book will fundamentally change how you think about responsibility, ethics, and the long-term future of humanity.
What Is What We Owe the Future About?
At its core, this book is about longtermism: the idea that positively influencing the long-term future is a key moral priority of our time.
MacAskill argues:
- There could be trillions of humans in the future.
- Our actions today shape whether they exist — and how well they live.
- Therefore, the moral stakes of our era are enormous.
It’s closely connected to the philosophy of effective altruism, a movement that asks: How can we use evidence and reason to do the most good? But this book goes bigger than charity. It looks at civilization-level risks and how to prevent catastrophe.
The Big Idea: We Are Living at a Pivotal Moment in History
MacAskill introduces the concept of the “hinge of history.” This is the idea that some eras matter more than others because decisions made during them shape the entire trajectory of humanity. He suggests we might be living in one of those eras because of:
- Artificial intelligence
- Biotechnology
- Nuclear weapons
- Climate change
- Political instability
If we navigate these technologies wisely, the future could be extraordinarily prosperous. If we fail, we could permanently derail humanity’s potential. That’s a staggering claim. But the book builds the case carefully, using:
- Historical data on progress
- Moral philosophy
- Probability analysis
- Technological forecasting
AI Risk and Existential Threats
One of the most discussed sections focuses on artificial intelligence. MacAskill explores the possibility that advanced AI systems could:
- Outperform humans in critical domains
- Escape meaningful control
- Create irreversible consequences
He doesn’t argue doom is inevitable. He argues the stakes are high enough that ignoring the risk is irresponsible. The book also discusses:
- Engineered pandemics
- Nuclear war
- Extreme climate change
- Value lock-in (where harmful systems become permanently entrenched)
The key theme? Small probabilities × massive consequences = moral urgency.
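That multiplication can be made concrete with a toy expected-value calculation. The numbers below are invented purely for illustration (they are not from the book), and the `expected_value` helper is my own sketch of the standard formula, not anything MacAskill provides:

```python
# Toy expected-value comparison with illustrative numbers (not from the book).
# Expected value = probability of an outcome x the value at stake.

def expected_value(probability: float, stake: float) -> float:
    """Return the probability-weighted value of an outcome."""
    return probability * stake

# A certain, modest benefit: help 1,000 people with certainty.
certain = expected_value(1.0, 1_000)

# A long shot with vast stakes: a 0.1% chance of safeguarding
# 10 million future lives.
long_shot = expected_value(0.001, 10_000_000)

print(certain)    # 1000.0
print(long_shot)  # 10000.0 -- ten times larger, despite the tiny probability
```

On this style of reasoning, the long shot dominates even though it almost certainly fails — which is exactly why critics of expected-value arguments (discussed later in the review) find it counterintuitive.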
Moral Progress Is Real, But Not Guaranteed
One of the book’s most hopeful sections looks backward. Human history shows undeniable moral progress:
- The abolition of slavery
- Expanded rights for women
- Reductions in extreme poverty
- Growing global cooperation
But MacAskill warns: Progress isn’t automatic. It required:
- Institutions
- Activism
- Scientific advancement
- Moral courage
And future progress depends on us safeguarding civilization.
Longtermism Explained (Without the Jargon)
Here’s the simplified version: If future generations could vastly outnumber us, then protecting their potential well-being might be one of the most important moral tasks imaginable. That means:
- Preventing extinction risks
- Building stable, ethical institutions
- Guiding powerful technologies responsibly
- Investing in long-term solutions, not short-term gains
This doesn’t mean ignoring present suffering. It means expanding the moral circle across time.
Criticism and Controversy
To its credit, the book acknowledges objections. Critics argue:
- The future is too uncertain to prioritize.
- Longtermism could distract from urgent current issues.
- Predicting civilization-level outcomes is speculative.
MacAskill responds by emphasizing expected value reasoning: even if the probabilities are uncertain, extremely large future stakes still matter morally. You don’t have to fully agree to find the argument intellectually stimulating.
How This Book Compares to Other Big-Idea Books
If you’ve read:
- Sapiens
- Superintelligence
- The Precipice
You’ll recognize similar themes about humanity’s trajectory and existential risk.
But What We Owe the Future is more accessible than most academic philosophy texts.
It blends:
- Storytelling
- Data
- Moral reasoning
- Practical implications
Key Lessons from What We Owe the Future
1. The Future Could Be Vast
Human civilization may just be beginning. The potential scale of future life is almost incomprehensible.
2. Existential Risk Reduction Is Underrated
Preventing extinction or irreversible collapse could be one of the most impactful actions possible.
3. Institutions Matter More Than Individuals
Stable governance, scientific integrity, and ethical frameworks determine long-term outcomes.
4. Moral Circles Expand Over Time
Our ethical concern has historically widened — from tribe to nation to humanity. The next step may be concern across generations.
5. We Have More Influence Than We Think
Small groups have historically changed global outcomes. We are not powerless observers.
Who Should Read This Book?
This book is ideal for readers interested in:
- Longtermism explained
- Effective altruism philosophy
- AI risk and existential threats
- Climate change ethics
- Moral philosophy books
- The future of humanity
- Global catastrophic risk
It’s particularly relevant for policymakers, technologists, investors, and anyone thinking seriously about the next 100+ years.
Final Verdict: Is What We Owe the Future Worth Reading?
Yes — especially if you enjoy big, civilization-level thinking. It challenges short-termism. It expands moral imagination. It reframes what responsibility means in an age of powerful technology.
Even if you reject some of its conclusions, you won’t walk away thinking smaller. And that alone makes it valuable.
