If you don’t know what the robot apocalypse is, let me explain: an AI apocalypse is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure that future superintelligent machines remain under human control. Robot rebellions have been a major theme in science fiction for many decades, though the scenarios depicted in fiction are generally very different from those that concern scientists.

The Theory of the Singularity

The theory of the singularity is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would qualitatively far surpass all human intelligence.
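To make the feedback loop concrete, here is a minimal toy sketch in Python. Everything in it is an illustrative assumption, not a model from the literature: each self-improvement cycle adds capability in proportion to current intelligence raised to a power above 1, so each generation’s gains arrive faster than the last until the numbers blow up.

```python
# Toy sketch of the "runaway reaction" described above. The function
# name, parameters, and units are arbitrary assumptions chosen only
# to illustrate super-linear feedback, not a real forecast.

def intelligence_explosion(initial=1.0, gain=0.1, exponent=1.5,
                           ceiling=1e12, max_cycles=100):
    """Iterate self-improvement cycles until growth runs away."""
    intelligence = initial
    for cycle in range(1, max_cycles + 1):
        # Super-linear feedback (exponent > 1): smarter agents
        # improve themselves faster with every cycle.
        intelligence += gain * intelligence ** exponent
        print(f"cycle {cycle:3d}: intelligence ~ {intelligence:.3g}")
        if intelligence > ceiling:
            print("runaway growth: the toy model has 'exploded'")
            break

intelligence_explosion()
```

With these (arbitrary) settings, growth looks gradual for the first couple of dozen cycles and then runs away within a handful more, which is the qualitative shape the hypothesis describes: a long quiet ramp followed by an abrupt explosion.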

In 1958, Stanislaw Ulam reported a discussion with John von Neumann that “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. Subsequent authors have echoed this viewpoint.

I. J. Good’s “intelligence explosion” model, proposed in 1965, predicts that a future superintelligence will trigger a singularity: a machine smart enough to design a still smarter machine would set off a self-reinforcing cycle of ever-better designs.

Vernor Vinge, emeritus professor of computer science at San Diego State University and science fiction author, wrote in his 1993 essay “The Coming Technological Singularity” that the singularity would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.

Four polls of AI researchers, conducted in 2012 and 2013, suggested a median estimate of a 50% chance that artificial general intelligence (AGI) would be developed by 2040–2050.

In the 2010s, public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction. The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.