The future has many forks in the road.
Apologies, but this diary requires reading.
We all feel it, don't we? Something seriously wrong and deadly is tiptoeing terrifyingly behind us, sending a chill down our spines with the hauntings of losing our democracy, white nationalist violence, the looming threat of the death of the biosphere, economic collapse, pandemics, and the unrelenting heating of the planet. Images of firestorms, windstorms, drought, and flooding assault our senses daily. In the words of J.R.R. Tolkien, "The world is changed. I feel it in the water. I feel it in the earth. I smell it in the air. Much that once was is lost, for none now live who remember it."
I had not heard of longtermism until an article on the loony altruistic philosophy came across my Twitter feed. It was longtermism that inspired Elon Musk to buy Twitter.
...the plan is to take some contemporary hunter-gatherers — whose populations have been decimated by industrial civilization — and stuff them into bunkers with instructions to rebuild industrial civilization in the event that ours collapses. This is, as Audra Mitchell and Aadita Chaudhury write, "a stunning display of white possessive logic."
Émile P. Torres writes in Salon about who these people are and how toxic they have become:
Perhaps you've stumbled upon the New Yorker profile of William MacAskill, the public face of longtermism. Or read MacAskill's recent opinion essay in the New York Times. Or seen the cover story in TIME magazine: "How to Do More Good." Or noticed that Elon Musk retweeted a link to MacAskill's new book, "What We Owe the Future," with the comment, "Worth reading. This is a close match for my philosophy."
As I have previously written, longtermism is arguably the most influential ideology that few members of the general public have ever heard about. Longtermists have directly influenced reports from the secretary-general of the United Nations; a longtermist is currently running the RAND Corporation; they have the ears of billionaires like Musk; and the so-called Effective Altruism community, which gave rise to the longtermist ideology, has a mind-boggling $46.1 billion in committed funding. Longtermism is everywhere behind the scenes — it has a huge following in the tech sector — and champions of this view are increasingly pulling the strings of both major world governments and the business elite.
Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.
In practical terms, that means we must do whatever it takes to survive long enough to colonize space, convert planets into giant computer simulations and create unfathomable numbers of simulated beings.
Additionally, longtermism distracts from today's maladies. Longtermists sincerely believe and assert that "hypothetical future lives can morally compete with those alive today," as Seth Lazar wrote in a tweet since removed from the Twitter platform.
Dave Karpf examines the stories that tech billionaires like to tell themselves.
Longtermism is very Oxford/Cambridge and is steeped in moral philosophy. William MacAskill gives off a strong Chidi-Anagonye-with-a-Scottish-brogue vibe. Longtermists are also tech accelerationists and techno-optimists. They also believe we are at a fulcrum point in human history. But they add in a layer of utilitarian calculus, arguing:
(1) future people have the same moral worth as people living today (MacAskill writes “Future people are utterly disenfranchised… They are the true silent majority.”)
(2) if we succeed in spreading the light of consciousness throughout the cosmos, there will be trillions upon trillions of future people, so their interests far outweigh our own.
(3) We thus ought to focus on preventing “existential risks” — asteroid strikes, bioweapons, and (especially) hostile artificial intelligence — that could be extinction-level events.
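To see why point (2) does so much work in that calculus, here is a toy back-of-the-envelope sketch. The population figures are purely illustrative assumptions of mine, not numbers from MacAskill or Karpf:

```python
# Toy sketch of the utilitarian arithmetic behind point (2): once hypothetical
# future people are counted in the trillions, everyone alive today becomes a
# rounding error. Both figures below are assumptions for illustration only.

present_people = 8e9             # roughly the current world population
assumed_future_people = 1e12     # "trillions upon trillions" -- one trillion already suffices

present_share = present_people / (present_people + assumed_future_people)
print(f"Moral weight assigned to everyone alive today: {present_share:.2%}")
# prints roughly 0.79%
```

Under those assumed numbers, everyone alive today counts for less than one percent of the moral ledger, which is exactly the move the critics quoted below object to.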
Émile P. Torres, a Ph.D. candidate in philosophy at Leibniz Universität Hannover in Germany, writes in Aeon about how entrenched longtermism has become among world leaders.
Yet this is not the case: the topic of our extinction has received little sustained attention from philosophers until recently, and even now remains at the fringe of philosophical discussion and debate. On the whole, they have been preoccupied with other matters. However, there is one notable exception to this rule: over the past two decades, a small group of theorists mostly based in Oxford have been busy working out the details of a new moral worldview called longtermism, which emphasizes how our actions affect the very long-term future of the universe – thousands, millions, billions, and even trillions of years from now. This has roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (FHI) in 2005, and Nick Beckstead, a research associate at FHI and a programme officer at Open Philanthropy. It has been defended most publicly by the FHI philosopher Toby Ord, author of The Precipice: Existential Risk and the Future of Humanity (2020). Longtermism is the primary research focus of both the Global Priorities Institute (GPI), an FHI-linked organisation directed by Hilary Greaves, and the Forethought Foundation, run by William MacAskill, who also holds positions at FHI and GPI. Adding to the tangle of titles, names, institutes and acronyms, longtermism is one of the main ‘cause areas’ of the so-called effective altruism (EA) movement, which was introduced by Ord in around 2011 and now boasts of having a mind-boggling $46 billion in committed funding.
Meanwhile, the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values. Other organisations such as GPI and the Forethought Foundation are funding essay contests and scholarships in an effort to draw young people into the community, while it’s an open secret that the Washington, DC-based Center for Security and Emerging Technologies (CSET) aims to place longtermists within high-level US government positions to shape national policy. In fact, CSET was established by Jason Matheny, a former research assistant at FHI who’s now the deputy assistant to US President Joe Biden for technology and national security. Ord himself has, astonishingly for a philosopher, ‘advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science’, and he recently contributed to a report from the Secretary-General of the United Nations that specifically mentions ‘long-termism’.
The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today.
The most dangerous aspect of longtermism is its climate claims. Émile P. Torres writes again in the Bulletin of the Atomic Scientists.
Remarkably, these people believe that a temperature rise of 15 degrees Celsius (roughly 27 degrees Fahrenheit) is survivable, and that the billions of people currently living who will die in such an unlivable world are a necessary sacrifice for people yet to be born trillions of years from now.
It’s impossible to read the longtermist literature published by the group 80,000 Hours (co-founded by MacAskill), Halstead, and others without coming away with a rosy picture of the climate crisis. Statements about climate change being bad are frequently followed by qualifiers such as “although,” “however,” and “but.” There’s lip service to issues like climate justice—the fact that the Global North is primarily responsible for a problem that will disproportionately affect the Global South—but ultimately what matters to longtermists is how humanity fares millions, billions, and even trillions of years from now. In the grand scheme of things, even a “giant massacre for man” would be, in Bostrom’s words, nothing but “a small misstep for mankind” if some group of humans managed to survive and rebuild civilization.
One finds the same insouciant attitude about climate change in MacAskill’s recent book. For example, he notes that there is a lot of uncertainty about the impacts of extreme warming of 7 to 10 degrees Celsius but says “it’s hard to see how even this could lead directly to civilisational collapse.” MacAskill argues that although “climatic instability is generally bad for agriculture,” his “best guess” is that “even with fifteen degrees of warming, the heat would not pass lethal limits for crops in most regions,” and global agriculture would survive.
Assessing MacAskill’s climate claims.
These claims struck me as dubious, but I’m not a climate scientist or agriculture expert, so I contacted a number of leading researchers to find out what they thought. They all told me that MacAskill’s climate claims are wrong or, at best, misleading.
For example, I shared the section about global agriculture with Timothy Lenton, who directs the Global Systems Institute and is Chair in Climate Change and Earth System Science at the University of Exeter. Lenton told me that MacAskill’s assertion about 15 degrees of warming is “complete nonsense—we already show that in a 3-degree-warmer world there are major challenges of moving niches for human habitability and agriculture.”
Émile also interviewed climate scientist Michael Mann and ecologist Gerardo Ceballos, who were equally unimpressed by the nonsense: the billionaires' fixation on non-existential issues over the irrefutable science of the existential crisis we face today. IMHO, downright diabolical.
Émile continues:
The experts I consulted had similar responses to another claim in MacAskill’s book, that underpopulation is more worrisome than overpopulation—an idea frequently repeated by Elon Musk on social media. Ceballos, for example, replied: “More people will mean more suffering and a faster collapse,” while Philip Cafaro, an environmental ethicist, told me that MacAskill’s analysis is “just wrong on so many levels. . . It’s very clear that 8 billion people are not sustainable on planet Earth at anything like our current level of technological power and per-capita consumption. I think probably one to two billion people might be sustainable.”
For further reading
Effective altruism’s most controversial idea
Longtermism: how good intentions and the rich created a dangerous creed
Enough longtermism — we need to think about now
Alexander Zaitchik on Effective Altruism + Longtermism
Google is your friend when it comes to such complex altruism.
The video has a long ad at the beginning. You can skip to 1:20 in the video for the interesting discussion.