UN Report: How to stop risking human extinction

Since 1990, the United Nations Development Program has been mandated to publish reports on the state of the world every few years. The 2021/2022 report – released earlier this month and the first since the start of the Covid-19 pandemic – is titled Uncertain Times, Unsettled Lives. And unsurprisingly, it makes for stressful reading.

“The war in Ukraine is reverberating around the world,” the report begins, “causing immense human suffering, including a cost of living crisis. Climate and environmental disasters threaten the world every day. It is seductively easy to dismiss these crises as one-off events and hope for a return to normalcy. But putting out the latest fire or ousting the latest demagogue will be an unwinnable game of whack-a-mole unless we come to terms with the fact that the world is fundamentally changing. There is no turning back.”

These words ring true. Just a few years ago we lived in a world where experts had long warned of an impending pandemic that could be devastating – now we live in a world that has clearly been ravaged by one. As recently as a year ago, Europe had not seen a major land war since World War II, and some pundits were optimistic that no two countries with McDonald’s would ever go to war with each other.

Now, not only does Russia occupy parts of Ukraine, but the mauling of the Russian army in the fighting there has also sparked other regional instabilities, most notably Azerbaijan’s attack on Armenia earlier this month. The fear of nuclear weapons being used in wartime, which had faded since the Cold War, is back, as people worry about whether Putin might resort to tactical nuclear weapons in the face of total defeat in Ukraine.


Of course, it is possible – even probable – that all of these situations will be resolved without catastrophe. The worst rarely happens. But it’s hard to avoid the feeling that we’re just rolling the dice and hoping we don’t hit an unlucky number at some point. Any pandemic, any minor war between nuclear powers, any new and uncontrolled technology carries some small chance of escalating into an event of catastrophic proportions. And if we take that risk every year without taking precautions, humanity’s lifespan may be limited.

Why “existential security” is the answer to “existential risk”

Toby Ord, Senior Research Fellow at the Future of Humanity Institute in Oxford and author of the book The Precipice: Existential Risk and the Future of Humanity, explores this question in an essay in the latest UNDP report. He calls it the problem of “existential security”: the challenge of not just preventing every single looming catastrophe, but building a world that stops rolling the dice on possible extinction.

“To survive,” he writes in the report, “we must achieve two things. We must first lower the current level of existential risk — put out the fires we already face from the threat of nuclear war and climate change. But we can’t just keep fighting fires. A hallmark of existential risk is that there are no second chances — a single existential catastrophe would be our permanent undoing. Therefore, we also need to create the equivalent of fire departments and fire codes – to make institutional changes that ensure existential risk (including that from new technologies and developments) stays low forever.”


He illustrates the point with this rather chilling graphic:

Toby Ord, UN Human Development Report 2021-2022

The idea is this: Suppose we are living through a situation where a dictator is threatening nuclear war, or where tensions between two nuclear powers seem to be reaching a breaking point. Perhaps most of the time the situation is defused, as indeed it was during the many, many near misses of the Cold War. But if this situation recurs every few decades, the probability that we defuse every single potential nuclear war steadily falls. The probability that humanity will still be around 200 years from now becomes pretty slim, just as the probability that you can keep winning at craps shrinks with every roll.
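The arithmetic behind this compounding is simple to sketch. The figures below are purely illustrative assumptions, not numbers from the report or from Ord’s essay: they just show how even a modest per-decade risk multiplies into a grim long-run survival probability.

```python
def survival_probability(per_period_risk: float, periods: int) -> float:
    """Chance of avoiding catastrophe in every one of `periods` periods,
    assuming the same independent risk each period."""
    return (1 - per_period_risk) ** periods

# Assume (purely for illustration) a 10% chance of catastrophe per decade.
# Over 200 years, that's 20 independent rolls of the dice:
p = survival_probability(0.10, 20)
print(f"Chance of surviving 200 years: {p:.3f}")  # roughly 0.122
```

Under these assumed numbers, humanity would have only about a one-in-eight chance of making it through two centuries unscathed, which is why the essay argues for driving the per-period risk itself toward zero rather than relying on defusing each crisis.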

“Existential security” is the state in which, in any given year, decade, or ideally even century, we are mostly not exposed to risks that have a significant chance of annihilating civilization. For existential security against nuclear risk, for example, we might reduce nuclear arsenals to the point where even a full nuclear exchange would pose no risk of civilizational collapse – something the world made significant progress on as countries drew down their arsenals after the Cold War. For existential security against pandemics, we could develop PPE that is comfortable to wear and offers near-total protection against disease, along with a global system for early disease detection – ensuring that any potentially catastrophic pandemic is nipped in the bud and people can be protected from it.

The ideal, however, would be existential security across the board – not only from known risks, but also from unknown ones. For example, a major concern among experts, including Ord, is that once we build highly capable artificial intelligence, AI will dramatically accelerate the development of new world-endangering technologies, while – owing to the way modern AI systems are designed – it will be incredibly difficult to tell what such a system is doing or why.

So an ideal approach to managing existential risk not only combats today’s threats, but also creates policies that prevent future threats from arising.

That sounds good. As longtermist researchers have recently argued, existential risks pose a particularly devastating threat because they could destroy not only the present, but a future that hundreds of billions of people might one day inhabit. But how do we make it happen?

Ord proposes “an institution aimed at existential security.” He points out that preventing the end of the world should be precisely the sort of thing that falls within the remit of the United Nations – after all, “the risks that could destroy us transcend national borders,” he writes. The problem, according to Ord, is that for an institution to prevent existential risks, it would need extensive powers to intervene in the world. No country wants another country to be allowed to conduct an incredibly dangerous research program, but at the same time no country wants other countries to have control over its own research programs. Only a supranational agency – something like the International Atomic Energy Agency, but with a much broader remit – could plausibly overcome these narrower national concerns.

Often the hard part in securing the future of humanity isn’t figuring out what needs to be done, but actually doing it. With climate change, the problem and its risks were well known long before the world took action to move away from greenhouse gases. Experts warned of pandemic risk before Covid-19 struck, but they largely went unheeded – and institutions the US thought were ready, like the CDC, turned out to be flat-footed in a real crisis. Today there are expert warnings about artificial intelligence, while other experts assure us there will be no problem and we needn’t try to solve it.

Writing reports only helps if people read them; building an international institution for existential security will only work if there is a way to turn the study of existential risks into serious, coordinated action to ensure we do not succumb to them. “Right now there isn’t enough support,” Ord admits, but “this may change over the years or decades as people slowly come to grips with the severity of the threats humanity faces.”

Ord doesn’t speculate on what might bring about this change, but personally I’m pessimistic. Anything that reshapes the international order enough to support international institutions with real authority over existential risks would likely have to be a devastating disaster itself. It seems unlikely that we’ll make it down the path to “existential security” without running some serious risks along the way – which we’ll hopefully survive and learn from.
