The Case Against Saving All of Humanity

Tucked away on a sleepy side street, hidden within a set of nondescript office buildings, lies Oxford’s Future of Humanity Institute. Here, some of the most talented philosophical minds on Earth have assembled to dedicate their lives to saving humanity from certain extinction. In conjunction with the Centre for Effective Altruism, FHI combines rigorous argumentation with cutting-edge, data-driven methods to tackle some of the most pressing ethical issues facing us today: climate change, poverty, nuclear weapons, pandemics, and pernicious artificial intelligence. Led by philosophers Nick Bostrom, Toby Ord, and Will MacAskill, the efforts of this small group have quickly garnered worldwide attention, acclaim, and funding from leaders and influencers at the heights of government, industry, and Big Tech.

Much of the mission of FHI and CEA centers around the welfare of future generations. In his 2018 TED talk, MacAskill notes,

Problems that affect future generations are often hugely neglected. Why? Because future people don’t participate in markets today. They don’t have a vote. It’s not like there’s a lobby representing those born in 2300 A.D. They don’t get to influence the decisions we make today. They’re voiceless.[i]

FHI and CEA have therefore taken it upon themselves to be the voice for such voiceless future persons. In so doing, they have advocated for policies, regulations, and prescriptions attendant to such existential concerns.

Saving all of humanity from itself, of course, comes with certain costs. As a 2019 Business Insider interview with Nick Bostrom notes,

Under Bostrom's vision of mass surveillance, humans would be monitored at all times via artificial intelligence, which would send information to "freedom centers" that work to save us from doom. To make this possible, he said, all humans would have to wear necklaces, or "freedom tags," with multi-directional cameras… "Obviously there are huge downsides and indeed massive risks to mass surveillance and global governance."[ii]

On the surface, such sacrifices for the sake of all of future humanity seem justified. Upon closer inspection, however, such arguments reveal serious moral and metaphysical complications. One of these is Derek Parfit’s famous ‘non-identity problem’. We can articulate the problem as the following trilemma:

1. An act is wrong only if that act makes things ‘worse for’ some existing or future person.

2. Conferring an existence that is unavoidably flawed and yet not so flawed that it is less than worth having does not make things ‘worse for’ the person whose existence it is.

3. There seem to be certain ‘existence-inducing acts’ that are still somehow wrong.[iii]

An example of such an ‘existence-inducing act’ might be a climate policy that allocates resources to present persons to the neglect of future persons. On the surface, selfishly allocating resources to ourselves to the predictable detriment of our progeny seems clearly wrong. However, since the policy would bring about both the suboptimal future state of affairs and the particular set of future persons who inherit it, those future persons seem to have no complaint against the policy: had we chosen otherwise, a different set of persons altogether would have come into existence.

For almost forty years now, philosophers have debated such non-identity complications, noting their relevance to pregnancy cases, wrongful-life cases, climate and environmental cases, and issues of reparative justice. Consensus on the non-identity problem remains far from settled within the philosophical discourse, with some philosophers arguing that it carries tremendous weight on the moral ledger and others arguing that it should carry no weight at all.

Such purely theoretical concerns are not the only complication to EA and FHI’s salvific mission, however. As Larry Temkin rightly notes, serious practical problems often arise when such theoretical altruistic imperatives are implemented in the actual world at too large a scale. Extended supply chains, bureaucratic inefficiencies, non-transparency, lack of accountability, abuse of power, moral hazards of various sorts, and so on, taken to scale, often combine in unforeseen ways to make any sort of collective action, let alone global-level collective action, much less effective than more modest national or local efforts. To quote Temkin,

“In many of the most desperate places on earth, there are aid groups after aid groups after aid groups falling on top of one another…and the sad reality is that often when the aid money pours in to help the needy people who are the victims of social injustice, some of that money, sometimes a surprisingly shockingly large amount, ends up being syphoned off in a number of ways into the very perpetrators of the injustice in the first place.”[iv]

History abounds with such failures to implement well-intentioned, idealized visions at too large a scale. One clear example is ‘Operation Restore Hope’, the 1992 U.S.-led, U.N.-sanctioned humanitarian mission to restore infrastructure, food, resources, and aid to the people of Somalia. The initial humanitarian effort, well-intentioned as it was, saw the bulk of these resources siphoned away from the average Somali citizen and redirected into the hands of warlord Mohamed Farrah Aidid and his organized crime syndicate of tribal warlords and mercenaries. Only later did the U.N. determine that the fair distribution of these humanitarian goods required not only good intentions but the application of military force to strongarm the region into well-ordered governance so as to allow such fair distribution to take place at all. This attempt ultimately failed a year later with the downing of two U.S. Black Hawk helicopters over Mogadishu, the deaths of 18 American soldiers with 73 more wounded, and a swift withdrawal and retreat of U.S. and U.N. forces.

While the case of Somalia stands as a particularly harsh example, the disconnect between good intentions and effective implementation still matters for the kind of future cosmopolitanism that MacAskill, Ord, and Bostrom envision. Indeed, the incalculable amount of time, funding, labor, and other resources allocated each year to such global-scale humanitarian projects, in all likelihood, translates into any number of frivolous and unnecessary expenditures, stark inefficiencies and bureaucratic bottlenecks, and abuses of power of one kind or another, or ends up transmogrifying into a warehouse full of life-saving resources destined never to be delivered, or worse.

Furthermore, even if such appeals to future generations are completely sound, it is not obvious that the specific prescriptions FHI and CEA argue for are the best means of realizing such ends. It could well be that the first-order moral reasons regarding future generations would be more effectively honored through our existing Westphalian nation-state model, or at an even smaller, more localist level, than through the new transnational institutions that FHI and CEA prescribe and envision. Such complications should therefore at least give policymakers and donors pause before buying wholeheartedly into arguments from FHI and CEA that justify sweeping and encroaching political overreach by appeal to the infinite disutility of future human extinction.

A quick response by FHI and CEA here might be to argue that these are not ‘in principle’ problems and that such inefficiencies could be overcome with a more refined means/ends operation, a more efficient flowchart, a more optimized Bayesian algorithm, and so on.

But herein, perhaps, lies the greater problem.

For there is something deeply off-putting, indeed deeply inhuman, about the methodological instinct to collapse everything of quality into quantity, the unquestioned faith in technological advancement, and the metaphysical assumption that morality can be fully reduced to a set of optimized decision procedures handed down by technocratic experts from on high. To embark on such a project seems, from the very outset, to treat morality, and indeed the whole of the human condition, as little more than a mathematical equation to be solved. In so doing, such an approach arguably does greater violence to the very human goods that originally motivated us to direct our focus toward such future concerns in the first place. What goods, institutions, or forms of life will we pass on to future generations if we have cannibalized all of them, ostensibly for their sake? The inheritance of an amnesiac, perpetual present, shorn of all tradition, cultural memory, and sense of one’s own place in history?

Lastly, the messianic character of such projects betrays an ignorance of human history and of human nature, both of its capacity for great harm in its quest for utopia and of its resilience and resourcefulness in the face of challenge and uncertainty. Bostrom, Ord, MacAskill, and others might therefore benefit from the words of C.S. Lewis:

How are we to live in an atomic age? I am tempted to reply: Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents. In other words, do not let us begin by exaggerating the novelty of our situation. If we are all going to be destroyed by an atomic bomb, let that bomb, when it comes, find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds.[v]

Thus, when it comes to large-scale collective action problems concerning the fate of the entire world, let us make sure that we aren’t helping ourselves to moral reasons that aren’t sufficiently justified, let us make sure that the cure is not worse than the disease, and most of all, let us make sure that we do not lose our humanity in our attempts to save humanity.

——————————————————————————————————————

[i] MacAskill, Will. “What Are the Most Important Moral Problems of Our Time?” TED talk, 2018. Available at: https://www.ted.com/talks/will_macaskill_what_are_the_most_important_moral_problems_of_our_time?language=en

[ii] Bendix, Aria. “An Oxford Scientist Who’s Inspired Elon Musk Thinks Mass Surveillance Might Be the Only Way to Save Humanity from Doom.” Business Insider, 2019. Available at: https://www.businessinsider.com/nick-bostrom-mass-surveillance-could-save-humanity-2019-4

[iii] “The Nonidentity Problem.” Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/nonidentity-problem/

[iv] Temkin, Larry. Interview by the Uehiro Centre for Practical Ethics. Available at: https://www.youtube.com/watch?v=l68pi6_alt0

[v] Lewis, C.S. “On Living in an Atomic Age” (1948).