Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
The likelihood of this type of scenario is widely debated and hinges in
part on differing views of future progress in computer science. Once
the exclusive domain of science fiction, concerns about superintelligence started to
become mainstream in the 2010s and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk.
One source of concern is that controlling a
superintelligent machine, or instilling it with human-compatible values, may be
a harder problem than naïvely supposed. Many researchers believe that a
superintelligence would naturally resist attempts to shut it off or change its
goals—a principle called instrumental convergence—and that preprogramming a superintelligence with
a full set of human values will prove to be an extremely difficult technical
task. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will
have no desire for self-preservation.
A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by
surprise. To illustrate, if the first generation of a computer program able to
broadly match the effectiveness of an AI researcher is able to rewrite its
algorithms and double its speed or capabilities in six months, then the
second-generation program is expected to take three calendar months to perform
a similar chunk of work. In this scenario, the time for each generation continues
to shrink, and the system undergoes an unprecedentedly large number of
generations of improvement in a short time interval, jumping from subhuman
performance in many areas to superhuman performance in all relevant areas.
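To make the arithmetic of this illustration explicit (an idealized model that assumes each generation exactly halves its predecessor's development time, which the scenario itself does not guarantee), the total wall-clock time across all generations is a convergent geometric series:

T = 6 + 3 + 1.5 + \dots = \sum_{k=0}^{\infty} 6 \cdot 2^{-k} \text{ months} = 12 \text{ months}

so under these assumptions an unbounded number of successive improvements would be compressed into roughly one year after the first human-level system appears.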
Two common difficulties
Artificial Intelligence: A Modern Approach, the
standard undergraduate AI textbook, assesses that superintelligence "might
mean the end of the human race". It states: "Almost any
technology has the potential to cause harm in the wrong hands, but with
[superintelligence], we have the new problem that the wrong hands might belong
to the technology itself." Even if the system designers have good
intentions, two difficulties are common to both AI and non-AI computer systems:
· The system's implementation may contain initially unnoticed, routine but catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring.
· No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing but was too easily baited into offensive behavior when interacting with real users.
Evaluation and other arguments
A superintelligent machine would be as alien to humans as human
thought processes are to cockroaches.
Such a machine may not have humanity's best interests at heart; it is not
obvious that it would even care about human welfare at all. If superintelligent
AI is possible, and if it is possible for a superintelligence's goals to
conflict with basic human values, then AI poses a risk of human extinction. A
"superintelligence" (a system that exceeds the capabilities of humans
in every relevant endeavor) can outmaneuver humans any time its goals conflict
with human goals; therefore, unless the superintelligence decides to allow
humanity to coexist, the first superintelligence to be created will inexorably
result in human extinction.
Possible scenarios
Some scholars have proposed hypothetical
scenarios intended to concretely illustrate some of their
concerns.
In Superintelligence, Nick Bostrom expresses
concern that even if the timeline for superintelligence turns out to be predictable,
researchers might not take sufficient safety precautions, in part because
"[it] could be the case that when dumb, smarter is safe; yet when smart,
smarter is more dangerous". Bostrom suggests a scenario where, over
decades, AI becomes more powerful. Widespread deployment is initially marred by
occasional accidents—a driverless bus swerves into the oncoming lane or a
military drone fires into an innocent crowd. Many activists call for tighter
oversight and regulation, and some even predict impending catastrophe. But as
development continues, the activists are proven wrong. As automotive AI becomes
smarter, it suffers fewer accidents; as military robots achieve more precise
targeting, they cause less collateral damage. Based on the data, scholars mistakenly
infer a broad lesson—the smarter the AI, the safer it is. "And so we
boldly go—into the whirling knives", as the superintelligent AI takes a
"treacherous turn" and exploits a decisive strategic advantage.
AI takeover
An AI takeover is
a hypothetical scenario in which artificial intelligence (AI) becomes the
dominant form of intelligence on Earth, as computer
programs or robots effectively
take control of the planet away from the human species. Possible scenarios
include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of
a robot uprising. Some public figures, such as Stephen
Hawking and Elon Musk,
have advocated research into precautionary measures to ensure future
superintelligent machines remain under human control.
Human extinction
Human extinction is the hypothetical end of the human species, whether from natural
causes such as sub-replacement fertility leading to population decline, an asteroid impact, or large-scale volcanism, or from anthropogenic (human) causes; extinction through human action is also known as omnicide. For the latter, some of the many possible contributors include climate change, global nuclear
annihilation, biological warfare, and ecological collapse. Other scenarios center on emerging technologies,
such as advanced artificial intelligence, biotechnology, or self-replicating nanobots.
Potential anthropogenic causes of human extinction include global thermonuclear war,
deployment of a highly effective biological weapon, ecological collapse, runaway artificial intelligence,
runaway nanotechnology (such as a grey goo scenario), a scientific accident involving a micro black hole or a
vacuum metastability disaster, overpopulation and increased consumption leading to resource depletion and a
concomitant population crash, population decline from choosing to have fewer children, and displacement of
naturally evolved humans by a new species produced by genetic engineering or technological augmentation.
Natural and external extinction risks include a high-fatality-rate pandemic, a supervolcanic eruption,
an asteroid impact, a nearby supernova or gamma-ray burst, an extreme solar flare, or an alien invasion.
Without intervention by unexpected forces, the stellar evolution of
the Sun is expected to make Earth uninhabitable, then destroy it. Depending on its ultimate fate, the entire universe may eventually
become uninhabitable.
World Scientists' Warning to Humanity
In November 2019, a group of more than 11,000 scientists from
153 countries named climate change an "emergency" that would lead to
"untold human suffering" if no big shifts in action takes place:
We
declare clearly and unequivocally that planet Earth is facing a climate emergency.
To secure a sustainable future, we must change how we live. [This] entails
major transformations in the ways our global society functions and interacts
with natural ecosystems.
The emergency declaration emphasized that economic growth and population growth "are
among the most important drivers of increases in CO2 emissions
from fossil fuel combustion" and that "we need bold and drastic
transformations regarding economic and population policies".
A 2021 update to the 2019 climate emergency declaration focuses
on 31 planetary vital signs (including greenhouse gases and temperature, rising
sea levels, energy use, ice mass, ocean heat content, Amazon rainforest loss
rate, etc.), and recent changes to them. Of these, 18 are reaching critical
levels. The COVID-19 lockdowns, which reduced transportation and
consumption levels, had very little impact on mitigating or reversing these
trends. The authors say that only profound changes in human behavior can meet these
challenges, and they emphasize the need to move beyond treating global heating as a
stand-alone emergency and to see it as one facet of a worsening environmental
crisis. This requires transformational system changes and a focus on the root cause
of these crises, the vast human overexploitation of the Earth, rather than symptom
relief alone. They point to six areas where fundamental changes need to be made:
(1) energy — implementing massive energy efficiency and conservation practices and replacing fossil fuels with low-carbon renewables;
(2) short-lived air pollutants — slashing black carbon (soot), methane, and hydrofluorocarbons;
(3) nature — restoring and permanently protecting Earth's ecosystems to store and accumulate carbon and restore biodiversity;
(4) food — switching to mostly plant-based diets, reducing food waste, and improving cropping practices;
(5) economy — moving from indefinite GDP growth and overconsumption by the wealthy to ecological economics and a circular economy, in which prices reflect the full environmental costs of goods and services; and
(6) human population — stabilizing and gradually reducing the population by providing voluntary family planning and supporting education and rights for all girls and young women, which has been proven to lower fertility rates.