The singularity is coming.  If computers are improving at an amazing pace, doubling in power and capacity every 18 months, then when will they finally overtake the human brain?  When this singularity happens, will the world be transformed radically?  Explore the issues here.
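As a back-of-the-envelope sketch of that "when will they overtake" arithmetic - the machine and brain figures below are assumptions for illustration, not numbers from this article:

```python
import math

# Hypothetical figures for illustration only: neither number comes from
# the article, and published estimates of the brain's capacity vary by
# orders of magnitude.
machine_ops_per_sec = 1e12   # assumed starting machine capacity
brain_ops_per_sec = 1e16     # one commonly cited rough estimate
doubling_period_months = 18  # the "every 18 months" pace from the text

# Doublings needed: 2**n * machine >= brain, so n >= log2(brain / machine)
doublings = math.ceil(math.log2(brain_ops_per_sec / machine_ops_per_sec))
years = doublings * doubling_period_months / 12

print(doublings, "doublings, about", years, "years under these assumptions")
```

Under these (purely illustrative) starting numbers the crossover is a couple of decades away, which is why small changes in the assumed figures swing the predicted date so widely.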



Origins of Singularity
Nikolai Kardashev
Nikolai Kardashev proposed in 1964 that civilizations could be measured by their use of energy: the greater a civilization's command of energy, the more sophisticated it is.  He developed the idea further in a 1984 paper entitled "On the Inevitability and the Possible Structures of Supercivilizations," presented at the symposium The Search for Extraterrestrial Life: Recent Developments.  Kardashev stated:
Since civilizations always face problems that require continuously greater activity, it is likely that supercivilizations will undertake activities and construct structures of a very large scale.  Properties and means of detection of such superstructures and activities are discussed.
According to this scale, we are a Type 0 civilization, not having reached even Type I yet.  Once the technological singularity arrives, we could quickly achieve Type I status.
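The "Type 0, not even Type I" claim can be made quantitative with Carl Sagan's later continuous interpolation of Kardashev's scale - a formula not mentioned in the original text, and the power figure below is an outside rough estimate:

```python
import math

def kardashev_type(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P the civilization's power use in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is very roughly 2e13 W (an outside estimate,
# not a figure from this article), which lands well below Type I:
print(round(kardashev_type(2e13), 2))  # roughly 0.73
print(kardashev_type(1e16))            # 1.0, the Type I threshold
```

On this interpolation humanity sits around K = 0.7, which is why the article can describe us as Type 0 while still "not even Type I yet."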

Irving John Good
Many believe that the idea of the technological singularity began with Irving John Good.  Good was a consultant to Stanley Kubrick, director of the film 2001: A Space Odyssey, on the topic of supercomputers, specifically the HAL 9000.  During World War II he worked on the decryption project at Bletchley Park alongside the legendary Alan Turing.
In 1965, Good published an article entitled "Speculations Concerning the First Ultraintelligent Machine" in the journal Advances in Computers.  In that article he made this statement:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
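The mechanism Good describes - machines designing still better machines - can be caricatured as a toy feedback loop.  Every constant below is arbitrary; this is a sketch of the compounding argument, not a model of any real AI system:

```python
# Toy model of Good's "intelligence explosion": each machine designs a
# successor, and design skill itself grows with current capability, so
# the growth rate compounds. All numbers are arbitrary illustrations.
def explosion(initial: float, threshold: float,
              gain: float = 0.1, max_gens: int = 1000):
    """Return the capability of each generation until threshold is passed."""
    history = [initial]
    while history[-1] <= threshold and len(history) < max_gens:
        current = history[-1]
        # improvement factor depends on current capability, hence "explosion"
        history.append(current * (1 + gain * current))
    return history

trajectory = explosion(initial=0.5, threshold=100.0)
print(len(trajectory) - 1, "design generations to pass the threshold")
```

The trajectory crawls for many generations and then blows past the threshold in just a few steps - the shape of the argument, whatever one thinks of its premises.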
But there are earlier exponents of this view.  Robert Thornton, a religious writer in Michigan, published the newspaper The Primitive Expounder, which, like many others of its time in America, discussed modern developments and interpreted them through the Bible.  Concerning the newly invented mechanical calculator, Thornton wrote:
...such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury.  But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!
Stanislaw Ulam
In 1958, Stanislaw Ulam described a conversation with John von Neumann, the world-renowned mathematician responsible for pivotal work in set theory, functional analysis, quantum mechanics, ergodic theory, continuous geometry, economics, game theory, computer science, numerical analysis, hydrodynamics, and statistics:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
The idea was further developed by Vernor Vinge, who actually used the term "singularity" in regard to intelligent machines in a 1983 issue of Omni magazine, where he wrote:
We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.
Vinge went on to write an article in 1993 entitled "The Coming Technological Singularity," in which he concluded:
Which is the valid viewpoint? In fact, I think the new era is simply too different to fit into the classical frame of good and evil. That frame is based on the idea of isolated, immutable minds connected by tenuous, low-bandwidth links. But the post-Singularity world _does_ fit with the larger tradition of change and cooperation that started long ago (perhaps even before the rise of biological life). I think there _are_ notions of ethics that would apply in such an era. Research into IA and high-bandwidth communications should improve this understanding. I see just the glimmerings of this now, in Good's Meta-Golden Rule, perhaps in rules for distinguishing self from others on the basis of bandwidth of connection. And while mind and self will be vastly more labile than in the past, much of what we value (knowledge, memory, thought) need never be lost. I think Freeman Dyson has it right when he says [8]: "God is what mind becomes when it has passed beyond the scale of our comprehension."
What is the Future for Humanity?
This is the BIG question, and there are two basic camps when it comes to answering it: the optimistic camp and the pessimistic camp.  It seems that, so far, everything mankind creates has a dual effect - positive and negative.  Why should supercomputers that are intelligent independently of us follow any different pattern?  If the technological singularity does arrive, it will inevitably lead to conflict between humans and these artificial superintelligences.

Critics of Technological Singularity
There are critics who do not believe that an age of technological singularity will ever arrive.  Some of them are famous figures in the fields of technology and artificial intelligence, among them Marvin Minsky, Gordon Moore, and Steven Pinker.

Steven Pinker
Steven Pinker, a well-known author and thinker and a professor of psychology at Harvard University, is skeptical, stating:
There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.
Gordon Moore
Gordon Moore, co-founder of Intel, agrees with Pinker, saying:
I am a skeptic. I don't believe this kind of thing is likely to happen, at least for a long time. And I don't know why I feel that way. The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans.
Douglas Hofstadter
Douglas Hofstadter, director of the Center for Research on Concepts and Cognition at Indiana University, states:
It might happen someday, but I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries. [The ramifications] will be enormous, since the highest form of sentient beings on the planet will no longer be human. Perhaps these machines--our 'children'--will be vaguely like us and will have culture similar to ours, but most likely not. In that case, we humans may well go the way of the dinosaurs.
Esther Dyson
Esther Dyson, an investor in and evangelist for emerging technologies, also echoes this skepticism of the technological singularity when she says:
The singularity I'm interested in will come from biology rather than machines. We won't be building things; we'll be growing and cultivating them, and then they will grow on their own.
Kevin Kelly
Kevin Kelly, a brilliant writer and former editor of Wired magazine, critiqued the Singularity as championed by Kurzweil in a 2006 post on his blog The Technium.  Kelly wrongly attributes the idea to Vinge; perhaps he was thinking of the source from which Kurzweil incorporated it into his own thinking.  Although very respectful of Kurzweil's achievements, Kelly is skeptical.  He thinks Kurzweil makes many assumptions that are not directly connected to artificial intelligence reaching the sophistication and power of the human mind.  Kelly believes that there have been many singularities in history, that some are occurring now, and that all of them have been imperceptible and continuous.
As the next level of organization kicks in, the current level is incapable of perceiving the new level, because that perception must take place at the new level. From within our emerging global culture, the coming phase shift to another level is real, but it will be imperceptible to us during the transition. Sure, things will speed up, but that will only hide the real change, which is a change in the rules of the game. Therefore we can expect in the next hundred years that life will appear to be ordinary and not discontinuous, certainly not cataclysmic, all the while something new gathers, until slowly we recognize that we have acquired the tools to perceive that new tools are present - and have been for a while.
He continues about the coming singularity,
In a thousand years from now, all the 11-dimensional charts at that time will show that "the singularity is near." Immortal beings and global consciousness and everything else we hope for in the future may be real and present but still, a linear-log curve in 3006 will show that a singularity approaches. The singularity is not a discrete event. It’s a continuum woven into the very warp of extropic systems. It is a traveling mirage that moves along with us, as life and the technium accelerate their evolution.
Marvin Minsky
Marvin Minsky, one of the pioneers of artificial intelligence, spoke about the singularity at the TED conference in 2008.

List of Singularity Leaders

Hugo de Garis
Hugo de Garis, an Australian researcher in artificial intelligence and evolvable hardware, used genetic algorithms to evolve neural networks; a believer in the singularity.

Raymond Kurzweil
Raymond Kurzweil, an American author, inventor, and futurist.