Ray Kurzweil speaking about the Technological Singularity
There are many types of "singularity." The idea of a "technological singularity" is usually traced back to Irving John Good (1916-2009). We will let him define it in his own words:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
So the point at which this first ultraintelligent machine is built is the point at which the singularity occurs. Good was a consultant on Stanley Kubrick's landmark film, 2001: A Space Odyssey. In it, the HAL 9000 supercomputer turns against its crew, fearing that they will shut it down after losing control of it. Here is the scene where Dave, the last remaining astronaut, is determined to turn the computer off no matter what.
The second person to speak of a technological singularity was Vernor Vinge, a professor of mathematics and computer science at San Diego State University. Vinge believed that once computers surpassed human intelligence, our race would no longer be able to predict the actions of these more intelligent computerized entities. The singularity, when it occurs, will appear to happen suddenly, surprising most people. In reality, it will be the crest of an exponential wave that has been building for some time; it only seems sudden because once a "tipping point" is passed, the change in progress is dramatic. Here are some of his charts illustrating this principle:
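Vinge's tipping-point observation can be sketched with a quick numerical toy (our own illustration with arbitrary numbers, not a reconstruction of his charts):

```python
# Toy sketch of why exponential progress looks sudden: a capability that
# doubles every period seems negligible for most of its history, then
# appears to explode once a "tipping point" is passed.
capability = [2 ** t for t in range(31)]  # levels after 0..30 doubling periods

# Halfway through the timeline, only a tiny fraction of the final level exists:
halfway_fraction = capability[15] / capability[30]
print(f"Halfway through, capability is {halfway_fraction:.5%} of its final level")

# The final doubling adds as much capability as the entire prior history:
last_gain = capability[30] - capability[29]
print(last_gain == capability[29])  # True
```

Most of the curve's growth is packed into its last few doublings, which is why steady exponential progress reads to an observer as a sudden event.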
Here is a lengthy but detailed explanation by Kurzweil himself.
Some have argued that the singularity, when it arrives, may not be a positive event for the human race. Kurzweil chooses to be an optimist, but others disagree. Even some of those who wrote about the idea before Kurzweil were unsure how positive an event it would be. A report published in 2008 by the Future of Humanity Institute at Oxford University theorized about how many people might die from technological advances that get out of control. An earlier paper, published in 2002 in the Journal of Evolution and Technology by Nick Bostrom, has a striking abstract:

"Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a 'posthuman' society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong. While we have had long exposure to various personal, local, and endurable global hazards, this paper analyzes a recently emerging category: that of 'existential risks.' These are threats that could cause our extinction or destroy the potential of Earth-originating intelligent life. Some of these threats are relatively well known while others, including some of the gravest, have gone almost unrecognized. Existential risks have a cluster of features that make ordinary risk management ineffective. A final section of this paper discusses several ethical and policy implications. A clearer understanding of the threat picture will enable us to formulate better strategies."

A "posthuman" society??
Could the singularity come to that? We think it could. But as in the case of our blog post about The Devicification of America, it does not matter. We are headed toward that world at a very fast pace. A technological tsunami is headed our way. We will, as we always have as a race, have to adapt and evolve. In this regard, we suppose we share Kurzweil's optimism. We WILL SURVIVE and THRIVE in a world of our own creation. What do you think? Speak your mind in the comments.
6 comments:
While I do think this technological explosion poses a threat, I do not think it is a threat like the one in "2001: A Space Odyssey." While I can't see machines taking us over, I can see something possibly far scarier: the possibility of us losing our humanity. Too often I see people who prefer communicating through machines: text messages, emails, Facebook, etc. No one wants to go out, hang out, talk face to face. And I see this getting worse. This is what scares me.
Well, as all the "singularists" have said, the very definition of the technological singularity is that once a computer (or, if you agree with Kevin Kelly, the internet as a virtual computer) surpasses the computational power of the human mind, we will no longer be able to make any predictions as to what might or might not happen. As for text messaging, that is a whole other matter, having to do with our particular society and its social mores. I wonder if they do the same thing in other cultures?
We have to be careful and balance the obvious need for the public to realize that accelerating returns are happening on parallel pathways (this will be quite a decade) with the reality that the move to AGI (Artificial General Intelligence) will take time.
The machine, not being biological, does not suffer the "3 B's" (booze, booty, bounty) as humans do. So as the machine gains the cognitive ability to create a forward-thinking storyline, the act of stealing someone's bank account will not arise from its intelligence. That act would be due to programming by a human.
As we move forward, we just need to interface with the machine and work toward keeping it from becoming as paranoid as most of us.
An excellent comment. I think most of the public has little idea just how quickly things are moving. Although some of the visions of Kurzweil and his followers are quite grand, I think in essence they are right. The question is always raised, and perhaps properly so, whether the machines will be able to overcome their programming through other programming that conflicts with it. Whether we will be able to give the machines the best parts of us without the worst remains to be seen.