[Image: Ray Kurzweil speaking about the Technological Singularity]
There are many types of "singularity." The idea behind the technological singularity was first laid out by the statistician Irving John Good (1916-2009), who called it an "intelligence explosion." We will let him define it in his own words:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
So the point at which this first ultraintelligent machine is built is the point at which the singularity occurs. Good was a consultant on Stanley Kubrick's famous landmark film 2001: A Space Odyssey, in which the superintelligent HAL 9000 computer turns against its crew after discovering that they plan to shut it down. Here is the scene where the last remaining astronaut, Dave, is determined to turn the computer off no matter what.
The person who actually coined the term "technological singularity" was Vernor Vinge, a professor of mathematics and computer science at San Diego State University. Vinge believed that once computers surpass human intelligence, our species will no longer be able to predict the actions of these more intelligent computerized entities. When the singularity occurs, it will appear to happen suddenly, surprising most people. In reality, though, it will be the culmination of an exponential trend that has been building for a long time; it only appears sudden because once a "tipping point" is passed, the change in progress becomes dramatic. Here are some of his charts illustrating this principle:
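To give a rough numerical sense of the same effect, here is a minimal, purely illustrative sketch in Python; the start year, the doubling period, and the use of 2040 as the "tipping point" are hypothetical assumptions for demonstration, not figures taken from Vinge's charts.

```python
# Toy doubling model: why exponential growth looks "sudden" near the end.
# All dates and the doubling period are hypothetical, for illustration only.

def capability(year, start_year=2000, doubling_period=2.0):
    """Capability that doubles every `doubling_period` years."""
    return 2 ** ((year - start_year) / doubling_period)

tipping_point = capability(2040)  # treat 2040 as the hypothetical tipping point
for year in range(2000, 2041, 5):
    pct = 100 * capability(year) / tipping_point
    print(f"{year}: {pct:7.3f}% of the 2040 level")
```

Even in 2030 this toy curve sits at only about 3% of its 2040 level; most of the rise is packed into the last few doublings, which is why steady exponential progress can feel abrupt when the tipping point finally arrives.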
Here is a lengthy but detailed explanation by Kurzweil himself.

Some have argued that this singularity, when it arrives, may not be a positive event for the human race. Kurzweil chooses to be an optimist, but others disagree. Even some of the thinkers who framed the idea before Kurzweil were not sure how positive an event it would be. A 2008 report from the Future of Humanity Institute at Oxford University estimated how many people could die from technological advances that get out of control. An older paper, published in 2002 in the Journal of Evolution and Technology by Nick Bostrom, has a striking abstract:
Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a "posthuman" society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of "existential risks." These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies.

A "post-human" society?? Could this singularity come to that? We think it could. But as in the case of our blog post about The Devicification of America, it does not matter. We are headed toward that world at a very fast pace. A technological tsunami is headed our way. We will, as we always have as a race, have to adapt and evolve. We suppose in this regard we share Kurzweil's optimism. We WILL SURVIVE and THRIVE in a world of our own creation. What do you think?? Speak your mind through the comment bar.