Anthropogenic global catastrophic risk

A global catastrophic risk is a hypothetical future event which could damage human well-being on a global scale, even crippling or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity’s potential is known as an existential risk.

Potential global catastrophic risks include anthropogenic risks, caused by humans (technology, governance, climate change), and natural or external risks. Examples of technology risks are hostile artificial intelligence and destructive biotechnology or nanotechnology. Insufficient or malign global governance creates risks in the social and political domain, such as global war (including nuclear holocaust), bioterrorism using genetically modified organisms, cyberterrorism destroying critical infrastructure like the electrical grid, or the failure to manage a natural pandemic. Problems and risks in the domain of earth system governance include global warming, environmental degradation (including extinction of species), famine as a result of non-equitable resource distribution, human overpopulation, crop failures, and non-sustainable agriculture. Examples of non-anthropogenic risks are an asteroid impact event, a supervolcanic eruption, a lethal gamma-ray burst, a geomagnetic storm destroying electronic equipment, natural long-term climate change, and hostile extraterrestrial life.

Classifications

Global catastrophic vs. existential
Philosopher Nick Bostrom classifies risks according to their scope and intensity. A “global catastrophic risk” is any risk that is at least “global” in scope, and is not subjectively “imperceptible” in intensity. Those that are at least “trans-generational” (affecting all future generations) in scope and “terminal” in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on Earth, humanity could still potentially recover. An existential risk, on the other hand, is one that either destroys humanity (and, presumably, all but the most rudimentary species of non-human lifeforms and/or plant life) entirely or at least prevents any chance of civilization recovering. Bostrom considers existential risks to be far more significant.

Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about “utter overthrow or ruin” on a global, rather than a “local or regional” scale. Posner singles out such events as worthy of special attention on cost-benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole. Posner’s events include meteor impacts, runaway global warming, grey goo, bioterrorism, and particle accelerator accidents.

Researchers experience difficulty in studying human extinction directly, since humanity has never been destroyed before. While this does not mean that it will not happen in the future, it does make modelling existential risks difficult, due in part to survivorship bias.

Other classifications
Bostrom identifies four types of existential risk. “Bangs” are sudden catastrophes, which may be accidental or deliberate. He thinks the most likely sources of bangs are malicious use of nanotechnology, nuclear war, and the possibility that the universe is a simulation that will end. “Crunches” are scenarios in which humanity survives but civilization is slowly destroyed. The most likely causes of this, he believes, are exhaustion of natural resources, a stable global government that prevents technological progress, or dysgenic pressures that lower average intelligence. “Shrieks” are undesirable futures. For example, if a single mind enhances its powers by merging with a computer, it could dominate human civilization. Bostrom believes that this scenario is most likely, followed by flawed superintelligence and a repressive totalitarian regime. “Whimpers” are the gradual decline of human civilization or current values. He thinks the most likely cause would be evolution changing moral preference, followed by extraterrestrial invasion.

Moral importance of existential risk

Some scholars have strongly favored reducing existential risk on the grounds that it greatly benefits future generations. Derek Parfit argues that extinction would be a great loss because our descendants could potentially survive for four billion years before the expansion of the Sun makes the Earth uninhabitable. Nick Bostrom argues that there is even greater potential in colonizing space. If future humans colonize space, they may be able to support a very large number of people on other planets, potentially lasting for trillions of years. Therefore, reducing existential risk by even a small amount would have a very significant impact on the expected number of people who will exist in the future.
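
The structure of this argument can be written as a simple expected-value calculation (the figures below are illustrative, not estimates from the source): if the future could contain N worthwhile lives and an intervention lowers the probability of existential catastrophe by \(\Delta p\), the expected number of lives preserved is

\[ \mathbb{E}[\text{lives preserved}] = \Delta p \cdot N . \]

With, say, \(N = 10^{16}\) potential future lives, even a tiny reduction of \(\Delta p = 10^{-6}\) corresponds to \(10^{10}\) expected lives, which is why proponents treat very small reductions in existential risk as extremely valuable.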

Exponential discounting might make these future benefits much less significant. However, Jason Matheny has argued that such discounting is inappropriate when assessing the value of existential risk reduction.
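
To see how much exponential discounting can matter here (the rate and horizon below are purely illustrative): a benefit of value V realized t years from now has present value

\[ PV = \frac{V}{(1+r)^{t}} , \]

so at a discount rate of r = 3% per year, a benefit 500 years away is weighted by a factor of about \(1.03^{-500} \approx 4 \times 10^{-7}\). Standard discounting therefore all but erases the far-future benefits described above, which is the kind of result Matheny argues makes such discounting inappropriate for evaluating existential risk reduction.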

Some economists have discussed the importance of global catastrophic risks, though not existential risks. Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that we are doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.

Numerous cognitive biases can influence people’s judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as concerned about 200,000 birds getting stuck in oil as they are about 2,000. Similarly, people are often more concerned about threats to individuals than to larger groups.

Potential sources of risk
Some sources of catastrophic risk are natural, such as meteor impacts or supervolcanoes. Some of these have caused mass extinctions in the past. On the other hand, some risks are man-made, such as global warming, environmental degradation, engineered pandemics and nuclear war.

Anthropogenic
The Cambridge Project at Cambridge University states that the “greatest threats” to the human species are man-made; they are artificial intelligence, global warming, nuclear war, and rogue biotechnology. The Future of Humanity Institute also states that human extinction is more likely to result from anthropogenic causes than natural causes.

Artificial intelligence
It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions, or that robots would out-compete humanity (one technological singularity scenario). Because of its exceptional scheduling and organizational capability and the range of novel technologies it could develop, it is possible that the first Earth superintelligence to emerge could rapidly become matchless and unrivaled: conceivably it would be able to bring about almost any possible outcome, and be able to foil virtually any attempt that threatened to prevent it achieving its objectives. It could choose to eliminate any challenging rival intellects; alternatively, it might manipulate or persuade them to change their behavior towards its own interests, or it might merely obstruct their attempts at interference. In his book Superintelligence: Paths, Dangers, Strategies, Bostrom defines this as the control problem. Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have echoed these concerns, with Hawking theorizing that this could “spell the end of the human race”.

In 2009, the Association for the Advancement of Artificial Intelligence (AAAI) hosted a conference to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved “cockroach intelligence.” They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns.

A survey of AI experts estimated that the chance of human-level machine intelligence having an “extremely bad (e.g., human extinction)” long-term effect on humanity is 5%. A survey by the Future of Humanity Institute estimated a 5% probability of extinction by superintelligence by 2100. Eliezer Yudkowsky believes that risks from artificial intelligence are harder to predict than any other known risks due to bias from anthropomorphism. Since people base their judgments of artificial intelligence on their own experience, he claims that they underestimate the potential power of AI.

Biotechnology
Biotechnology can pose a global catastrophic risk in the form of bioengineered organisms (viruses, bacteria, fungi, plants or animals). In many cases the organism will be a pathogen of humans, livestock, crops or other organisms we depend upon (e.g. pollinators or gut bacteria). However, any organism able to catastrophically disrupt ecosystem functions (e.g. highly competitive weeds outcompeting essential crops) poses a biotechnology risk.

A biotechnology catastrophe may be caused by a genetically engineered organism accidentally escaping from controlled environments, by the planned release of such an organism which then turns out to have unforeseen and catastrophic interactions with essential natural or agro-ecosystems, or by the intentional use of biological agents in biological warfare or bioterrorism attacks. Pathogens may be intentionally or unintentionally genetically modified to change their virulence and other characteristics. For example, a group of Australian researchers unintentionally changed characteristics of the mousepox virus while trying to develop a virus to sterilize rodents. The modified virus became highly lethal even in vaccinated and naturally resistant mice. The technological means to genetically modify virus characteristics are likely to become more widely available in the future if not properly regulated.

Terrorist applications of biotechnology have historically been infrequent. To what extent this is due to a lack of capabilities or motivation is not resolved. However, given current development, more risk from novel, engineered pathogens is to be expected in the future. Exponential growth has been observed in the biotechnology sector, and Noun and Chyba predict that this will lead to major increases in biotechnological capabilities in the coming decades. They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users). A survey by the Future of Humanity Institute estimated a 2% probability of extinction from engineered pandemics by 2100.

Noun and Chyba propose three categories of measures to reduce risks from biotechnology and natural pandemics: regulation or prevention of potentially dangerous research, improved recognition of outbreaks, and development of facilities to mitigate disease outbreaks (e.g. better and/or more widely distributed vaccines).

Cyberattack
Cyberattacks have the potential to destroy everything from personal data to electric grids. Christine Peterson, co-founder and past president of the Foresight Institute, believes a cyberattack on electric grids has the potential to be a catastrophic risk.

Global warming
Global warming refers to the warming caused by human technology since the 19th century or earlier. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, increased spread of known infectious diseases such as malaria, and rapid mutation of microorganisms. In November 2017, a statement by 15,364 scientists from 184 countries indicated that increasing levels of greenhouse gases from use of fossil fuels, human population growth, deforestation, and overuse of land for agricultural production, particularly by farming ruminants for meat consumption, are trending in ways that forecast an increase in human misery over coming decades.

Environmental disaster
An environmental or ecological disaster, such as world crop failure and collapse of ecosystem services, could be induced by the present trends of overpopulation, economic development, and non-sustainable agriculture. Most environmental scenarios involve one or more of the following: Holocene extinction event, scarcity of water that could lead to approximately one half of the Earth’s population being without safe drinking water, pollinator decline, overfishing, massive deforestation, desertification, climate change, or massive water pollution episodes. Detected in the early 21st century, a threat in this direction is colony collapse disorder, a phenomenon that might foreshadow the imminent extinction of the Western honeybee. As the bee plays a vital role in pollination, its extinction would severely disrupt the food chain.

An October 2017 report published in The Lancet stated that toxic air, water, soils, and workplaces were collectively responsible for 9 million deaths worldwide in 2015, particularly from air pollution which was linked to deaths by increasing susceptibility to non-infectious diseases, such as heart disease, stroke, and lung cancer. The report warned that the pollution crisis was exceeding “the envelope on the amount of pollution the Earth can carry” and “threatens the continuing survival of human societies”.

Mineral resource exhaustion
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics, has argued that the carrying capacity of Earth — that is, Earth’s capacity to sustain human populations and consumption levels — is bound to decrease sometime in the future as Earth’s finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse, leading to the demise of human civilization itself. Ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has propounded the same argument by asserting that “… all we can do is to avoid wasting the limited capacity of creation to support present and future life [on Earth].”

Ever since Georgescu-Roegen and Daly published these views, various scholars in the field have been discussing the existential impossibility of distributing Earth’s finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is little way of knowing in advance if or when mankind will eventually face extinction. In effect, any conceivable intertemporal distribution of the stock will inevitably end up with universal economic decline at some future point.
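
A minimal arithmetic sketch of the reasoning behind this claim (the symbols are illustrative, not from the source): if the usable stock of a mineral resource is a finite quantity S and each generation consumes a fixed positive amount c, the stock can last at most

\[ n_{\max} = \frac{S}{c} \]

generations; dividing the stock evenly among n generations instead gives each generation S/n, which tends to zero as n grows. Either way, no positive per-generation allocation of a strictly finite stock can be sustained indefinitely, which is the sense in which any intertemporal distribution eventually implies decline.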

Experimental technology accident
Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System. Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. For example, scientists worried that the first nuclear test might ignite the atmosphere. More recently, others worried that the RHIC or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. These particular concerns have been refuted, but the general concern remains.

Biotechnology could lead to the creation of a pandemic; chemical warfare could be taken to an extreme; and nanotechnology could lead to grey goo, in which out-of-control self-replicating robots consume all living matter on earth while building more of themselves. In each case, the disaster could occur either deliberately or by accident.

Nanotechnology
Many nanoscale technologies are in development or currently in use. The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision. Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories of desktop proportions. When nanofactories gain the ability to produce other nanofactories, production may only be limited by relatively abundant factors such as input materials, energy and software.

Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons. Equipped with compact computers and motors, these could be increasingly autonomous and have a large range of capabilities.

Chris Phoenix and Mike Treder classify catastrophic risks posed by nanotechnology into three categories:

From augmenting the development of other technologies such as AI and biotechnology.
By enabling mass-production of potentially dangerous products that cause risk dynamics (such as arms races) depending on how they are used.
From uncontrolled self-perpetuating processes with destructive effects.

Several researchers state that the bulk of risk from nanotechnology comes from the potential to lead to war, arms races and destructive global government. Several reasons have been suggested why the availability of nanotech weaponry may with significant likelihood lead to unstable arms races (compared to e.g. nuclear arms races):

A large number of players may be tempted to enter the race since the threshold for doing so is low;
The ability to make weapons with molecular manufacturing will be cheap and easy to hide;
Therefore, lack of insight into the other parties’ capabilities can tempt players to arm out of caution or to launch preemptive strikes;
Molecular manufacturing may reduce dependency on international trade, a potential peace-promoting factor;
Wars of aggression may pose a smaller economic threat to the aggressor since manufacturing is cheap and humans may not be needed on the battlefield.

Since self-regulation by all state and non-state actors seems hard to achieve, measures to mitigate war-related risks have mainly been proposed in the area of international cooperation. International infrastructure may be expanded, giving more sovereignty to the international level. This could help coordinate efforts for arms control. International institutions dedicated specifically to nanotechnology (perhaps analogously to the International Atomic Energy Agency, IAEA) or to general arms control may also be designed. One may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour. The Center for Responsible Nanotechnology also suggests some technical restrictions. Improved transparency regarding technological capabilities may be another important facilitator for arms control.

Grey goo is another catastrophic scenario, which was proposed by Eric Drexler in his 1986 book Engines of Creation and has been a theme in mainstream media and fiction. This scenario involves tiny self-replicating robots that consume the entire biosphere using it as a source of energy and building blocks. Nowadays, however, nanotech experts—including Drexler—discredit the scenario. According to Phoenix, a “so-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident”.

Warfare and mass destruction
The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Although the probability of a nuclear war per year is slim, Professor Martin Hellman has described it as inevitable in the long run; unless the probability approaches zero, inevitably there will come a day when civilization’s luck runs out. During the Cuban missile crisis, U.S. president John F. Kennedy estimated the odds of nuclear war at “somewhere between one out of three and even”. The United States and Russia have a combined arsenal of 14,700 nuclear weapons, and there is an estimated total of 15,700 nuclear weapons in existence worldwide. Beyond nuclear, other military threats to humanity include biological warfare (BW). By contrast, chemical warfare, while able to create multiple local catastrophes, is unlikely to create a global one.
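
A simple way to make Hellman’s point concrete (the numbers below are illustrative, not estimates from the source): if the probability of nuclear war in any given year is a constant p, then the probability of at least one such war within n years is

\[ P(n) = 1 - (1 - p)^{n} . \]

Even a seemingly small annual probability of p = 0.01 gives \(P(100) \approx 1 - 0.99^{100} \approx 63\%\), and for any fixed p > 0, P(n) approaches 1 as n grows, which is the sense in which the risk is “inevitable in the long run”.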

Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating large numbers of nuclear weapons would have immediate, short-term and long-term effects on the climate, causing cold weather and reduced sunlight and photosynthesis that could generate significant upheaval in advanced civilizations. However, while popular perception sometimes takes nuclear war to be “the end of the world”, experts assign a low probability to human extinction from nuclear war. In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million people directly, mostly in the United States, Europe and Russia, and perhaps several hundred million more through follow-up consequences in those same areas. A survey by the Future of Humanity Institute estimated a 4% probability of extinction from warfare by 2100, with a 1% chance of extinction from nuclear warfare.

World population and agricultural crisis

The 20th century saw a rapid increase in human population due to medical developments and massive increases in agricultural productivity such as the Green Revolution. Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The Green Revolution in agriculture helped food production to keep pace with worldwide population growth or actually enabled population growth. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), in their 1994 study Food, Land, Population and the U.S. Economy, place the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy and avert disaster, the study says, the United States must reduce its population by at least one-third, and world population will have to be reduced by two-thirds.

The authors of this study believe that the mentioned agricultural crisis will begin to have an effect on the world after 2020, and will become critical after 2050. Geologist Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.

Wheat is humanity’s third-most-produced cereal. Extant fungal infections such as Ug99 (a kind of stem rust) can cause 100% crop losses in most modern varieties. Little or no treatment is possible and infection spreads on the wind. Should the world’s large grain-producing areas become infected, the ensuing crisis in wheat availability would lead to price spikes and shortages in other food products.

Risk perception
According to Eliezer Yudkowsky, many cognitive biases can influence the way in which individuals and groups consider the importance of global disaster risks, including scope insensitivity, the availability heuristic, representativeness bias, the affect heuristic, and overconfidence. For example, scope insensitivity leads people to be more concerned with threats to individuals than with threats to larger groups (which is why their donations to altruistic causes are not proportional to the magnitude of the problem), and therefore to treat the extinction of humanity as a less serious problem than it is. Similarly, representativeness bias leads people to downplay disasters that bear little resemblance to those they are already aware of, and to assume that the damage such disasters cause will not be much more serious.

It has often been noticed that most of the anthropogenic risks mentioned above correspond to very ancient myths; those of Prometheus, of Pandora and, more recently, of the sorcerer’s apprentice are the most representative. The symbolism of the four Horsemen of the Apocalypse, the last three representing War, Famine and Death, already appears in the Old Testament as the uncomfortable choice offered by God to King David. The various risks of machine revolt appear in the myth of the Golem and, combined with biotechnologies, in the story of Frankenstein’s monster. On the other hand, it has been suggested that the disaster narratives of various religious traditions (where they are most often related to the wrath of deities) may correspond to memories of real catastrophes (for example, the Flood may be linked to the reconnection of the Sea of Marmara with the Black Sea); under the name of coherent catastrophism, Victor Clube and Bill Napier developed the hypothesis that cataclysmic meteor showers gave rise to many cosmological myths, ranging from the account of the destruction of Sodom and Gomorrah (a thesis also defended by Marie-Agnes Courty) to the descriptions in the Book of Revelation; however, their ideas are not widely accepted by the scientific community.

The existence of these “mythical” interpretations, as well as of numerous end-of-the-world prophecies, encourages a partial or total refusal to take these disaster risks into account, a phenomenon known as the Cassandra syndrome: anthropogenic risks are minimized by attributing them to irrational fears, while the catastrophes described in the myths are judged to have been exaggerated by ignorance and by the distortion of memories.

The analysis of risks caused by humans suffers from two opposing biases: whistleblowers tend to exaggerate risks in order to be heard, or even to denounce imaginary risks in the name of the precautionary principle, while powerful economic interests try, conversely, to minimize the risks associated with their activities, as shown for example by the case of the Heartland Institute, and more generally by the analysis of the disinformation strategies described in Merchants of Doubt.

Giving a rational interpretation of the myth of the Golden Age, Jared Diamond finally observes that some disasters (the “collapses” of Nick Bostrom) can go undetected by the societies that suffer them, for lack of a sufficient historical memory; this is how he explains, for example, the ecological disaster suffered by the inhabitants of Easter Island.

Precautions and prevention
The concept of global governance respecting planetary boundaries has been proposed as an approach to disaster risk reduction. In particular, the field of geoengineering envisions manipulating the global environment to combat anthropogenic changes in atmospheric composition. Comprehensive food storage and conservation techniques have been explored, but their costs would be high, and they could aggravate the consequences of malnutrition. David Denkenberger and Joshua Pearce have suggested using a variety of alternative foods to reduce the risk of starvation related to global catastrophes such as a nuclear winter or sudden climate change, for example by converting biomass (trees and wood) into edible products; however, much progress in this area will be needed before such methods can allow a large fraction of the population to survive. Other risk-reduction suggestions, such as asteroid deflection strategies to deal with impact risks, or nuclear disarmament, prove economically or politically difficult to implement. Finally, the colonization of space is another proposal intended to increase the chances of survival in the face of an existential risk, but solutions of this type, which are currently inaccessible, will no doubt require, among other things, large-scale engineering.

Precautions actually taken, individually or collectively, include:

The establishment of food reserves (planned to last several years) and other resources by survivalists, for example in the framework of the construction of fallout shelters.

The Svalbard Global Seed Vault, an underground vault on the Norwegian island of Spitsbergen, intended to keep seeds of food crops from around the world safe and secure, thus preserving genetic diversity; some of these seeds should keep for several thousand years. In May 2017, the vault was flooded by melting permafrost caused by global warming, without damaging the seed supply.

Analyses and criticisms
The importance of the risks detailed in the previous sections is rarely denied, even though the risks to humans are often minimized; however, Nick Bostrom’s analyses have been criticized from several distinct perspectives.

Technical criticisms
Many of the risks mentioned by Nick Bostrom in his books are considered exaggerated (or even imaginary), or correspond to time scales so vast that it seems somewhat absurd to group them with almost immediate threats. Moreover, calculations of probability, expectation or utility are difficult or ill-defined for this kind of situation, as shown, for example, by paradoxes such as the doomsday argument, and as Nick Bostrom himself acknowledges. In particular, he developed an ethical argument claiming that the exorbitant number of our descendants doomed to nothingness by an existential catastrophe justifies employing every conceivable means to decrease, however slightly, the probability of such a catastrophe; however, the calculations on which this argument is based have been contested, and the argument may well be a fallacy.

In 2005, Nick Bostrom and Max Tegmark published an analysis of the risk of an instability of the whole universe. Regardless of the validity of their calculations (which tend to show that the risk is very low), one may wonder whether it is really meaningful to speak of a disaster of which no one would be warned in advance and which would leave no observer; during a similar discussion of the risk of a chain reaction igniting the whole atmosphere, a friend responded to Richard Hamming’s anxieties with “Do not worry, Hamming, there will be no one to blame you”.

Philosophical positions
Nick Bostrom’s analyses are based on transhumanism, an ideology advocating the use of science and technology to improve the physical and mental characteristics of human beings; he considers anything that could prevent humanity from realizing its full potential to be an existential risk.

This position has been severely criticized, partly because it leads to denying the values to which present-day humanity is attached in the name of hypothetical future values. Steve Fuller notes in particular that if a global catastrophe does not destroy all of humanity, the survivors may in some cases legitimately consider that their situation has improved.
