FuturologyBot t1_it78eyl wrote
The following submission statement was provided by /u/mossadnik:
Submission Statement:
>Since 1990, the United Nations Development Programme has been tasked with releasing reports every few years on the state of the world. The 2021/2022 report — released earlier this month, and the first one since the Covid-19 pandemic began — is titled “Uncertain Times, Unsettled Lives.”
>“The war in Ukraine reverberates throughout the world,” the report opens, “causing immense human suffering, including a cost-of-living crisis. Climate and ecological disasters threaten the world daily. It is seductively easy to discount crises as one-offs, natural to hope for a return to normal. But dousing the latest fire or booting the latest demagogue will be an unwinnable game of whack-a-mole unless we come to terms with the fact that the world is fundamentally changing. There is no going back.”
>Toby Ord, senior research fellow at Oxford’s Future of Humanity Institute and author of The Precipice: Existential Risk and the Future of Humanity, explores this question in an essay in the latest UNDP report. He calls it the problem of “existential security”: the challenge not just of preventing each individual prospective catastrophe, but of building a world that stops rolling the dice on possible extinction.
>“To survive,” he writes in the report, “we need to achieve two things. We must first bring the current level of existential risk down — putting out the fires we already face from the threats of nuclear war and climate change. But we cannot always be fighting fires. A defining feature of existential risk is that there are no second chances — a single existential catastrophe would be our permanent undoing. So we must also create the equivalent of fire brigades and fire safety codes — making institutional changes to ensure that existential risk (including that from new technologies and developments) stays low forever.”
>“Existential security” is the state in which, in any given year, decade, or ideally even century, we mostly face no risks with a substantial chance of annihilating civilization. For existential security from nuclear risk, for instance, perhaps we reduce nuclear arsenals to the point where even a full nuclear exchange would not risk collapsing civilization, something the world made significant progress on as countries slashed their arsenals after the Cold War. For existential security from pandemics, we could develop PPE that is comfortable to wear and provides near-total protection against disease, plus a worldwide system to detect diseases early, ensuring that any potentially catastrophic pandemic could be nipped in the bud before it spreads widely.
>The ideal, though, would be existential security from everything: not just the known threats, but the unknown ones too. For example, one big worry among experts including Ord is that once we build highly capable artificial intelligences, AI will dramatically hasten the development of new technologies that imperil the world, while, because of how modern AI systems are designed, it will be incredibly difficult to tell what those systems are doing or why.
>So an ideal approach to managing existential risk doesn’t just fight today’s threats but makes policies that will prevent threats from arising in the future too.