Will a singularity happen? Will there be many? Will there be none?
This interview was conducted through Facebook over a period of weeks with Peter Rothman, a computer programmer and founder of Dangerous Minds LLC, specializing in biometrics, mathematics, streaming media, virtual reality, simulation, text analysis, data visualization, and artificial intelligence. As per our other articles of this nature, his words will be in italics. Any editorial annotations, for the purposes of clarification or enrichment, will be in non-italic orange type.
Early Singularity
A few words on my own personal story... In the eighties I worked in the DoD R&D world, initially at Hughes Aircraft Company, where I worked on the F/A-18 and B-2 radar systems as a young engineer. I then joined a small company, TAU Corporation, that was doing some pretty interesting work at that time.
I was initially hired to work on something called the Sensor Manager, a system for automating control of onboard sensing systems for a tactical fighter aircraft. By around 1990, I had won some contracts extending these ideas to the field of Electronic Warfare and the management of ECM and ECCM [electronic countermeasures and counter-countermeasures] systems.
"...ignorance of how to use new ideas stockpiles exponentially..."
Marshall McLuhan
There was at this time an important SBIR project that went to Phase II, called the Intelligent Threat Management program, and this was a feeder to a larger project we were to work on with Loral Electronics called ADARS [Advanced Defense Avionics Response Strategy]. It was during the prep meetings for this project that I first heard about the singularity idea. This was about 1990, two years or so before Raymond Kurzweil's first book. The idea was well recognized at the time in the form of Moore's Law, and further there were specific intelligence estimates of the computing lead enjoyed by the U.S. over the U.S.S.R. at that time.
It turns out this was based on a CIA report from 1986, but I did not know that at the time. That report is now available under the FOIA. Anyway, at this time I was asked specifically what I might be able to do with an embedded computer with the computing power of the entire Soviet Union. Consider the import: the idea was to have a small computer that had more power than the entire computing resources of a powerful nation state. The ideas for this became the focus of the Intelligent Threat Management SBIR and my contribution to ADARS.
In 1990 or so, I am sitting in a Kosher Chinese restaurant in Yonkers, NY, and I am asked, in connection with my ideas on building state machine models of Integrated Air Defense Systems (IADS), what we might do with a computer that had more power than the entire computing power of the USSR. This was when the SCU (Soviet Computing Unit) mentioned in the CIA paper was first introduced to me, and we had a subsequent discussion of the implications of exponential computing growth in various aspects of this application, which was the suppression and penetration of complex air defense networks.
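[To make the exponential concrete, here is a rough formulation; the doubling times below are illustrative assumptions, not figures from the CIA report. If each side's computing base doubles on its own timescale, the ratio between them grows exponentially as well:

$$\frac{P_{\mathrm{US}}(t)}{P_{\mathrm{SU}}(t)} \;=\; R_0 \cdot 2^{\,t\left(\frac{1}{T_{\mathrm{US}}}-\frac{1}{T_{\mathrm{SU}}}\right)}$$

With, say, $T_{\mathrm{US}} = 18$ months and $T_{\mathrm{SU}} = 36$ months, the lead itself doubles every 36 months, i.e. it grows roughly tenfold per decade.]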
The essential idea was that you could build these models and then use them to predict the behavior of the entire network in response to various stimuli. People were sort of skeptical of the idea; after all, how could a complex network of such man/machine systems be predicted? The idea was essentially to build a probabilistic state machine model and explore the predicted future state tree, which, fairly obviously, gets very large very quickly. Hence the need for an airborne supercomputer.
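[A minimal sketch of why exploring the future state tree demands so much compute; the states, transitions, and probabilities below are invented for illustration and are not drawn from any real IADS model:

```python
# Toy probabilistic state machine for illustration only -- the states and
# probabilities are invented, not taken from any real air defense model.
TRANSITIONS = {
    "quiet":     [("searching", 0.7), ("quiet", 0.3)],
    "searching": [("tracking", 0.5), ("quiet", 0.2), ("searching", 0.3)],
    "tracking":  [("engaging", 0.6), ("searching", 0.4)],
    "engaging":  [("tracking", 0.5), ("engaging", 0.5)],
}

def expand_tree(state, depth, prob=1.0):
    """Enumerate every path through the predicted future state tree to a
    given depth, yielding (path, probability). The number of paths grows
    roughly as branching_factor ** depth, which is the blow-up that made
    an airborne supercomputer attractive."""
    if depth == 0:
        yield [state], prob
        return
    for nxt, p in TRANSITIONS[state]:
        for path, path_prob in expand_tree(nxt, depth - 1, prob * p):
            yield [state] + path, path_prob

for d in (2, 4, 8, 12):
    n = sum(1 for _ in expand_tree("quiet", d))
    print(f"depth {d:2d}: {n:>8,} predicted future paths")
```
]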
[Image: VAX 3100]
Another application discussed was in cryptography. If I have a significant computing advantage, I can generate codes that someone with fewer resources cannot crack. This is just due to the issues of computational complexity, and time complexity specifically. If I have one computer with, say, 100 SCUs, then there is no way that the USSR can break a code I generate with this machine, at least not by using conventional computational approaches, and assuming I have not made a mistake in the design of my system, etc.
This is an idea from algorithmic information theory. For any finite computer, there is a largest program I can run on that machine. Say this program is k bits long. If I have a larger computer, I can now run a program that is K bits long, where K >> k. We can ask what the output of a K-bit program looks like to the computer that can only run k-bit programs... the answer is that it looks "random".
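[In standard algorithmic information theory notation (our formulation, not the speaker's), the Kolmogorov complexity of a string $x$ relative to a universal machine $U$ is

$$C_U(x) \;=\; \min\{\, |p| : U(p) = x \,\}.$$

A machine limited to programs of at most $k$ bits can reproduce $x$ only if $C_U(x) \le k$; a string emitted by a well-chosen $K$-bit program with $K \gg k$ can have complexity greater than $k$, and to the smaller machine such a string is indistinguishable from random noise, since no program it can run captures the pattern.]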
Another application is in game theory, e.g. the iterated prisoner's dilemma. [We present a short video which illustrates this paradox. If you cannot see the embedded video, here is the link: http://youtu.be/boBmA0ADgVg.]
If I have a more powerful computer than my opponent I may be able to detect a pattern in his moves that he is unaware of or unable to remove. Put this together with Moore's Law and you see where it goes.
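[A minimal sketch of this asymmetry, assuming the weaker player runs a fixed patterned strategy and the stronger player can afford to fit a predictive model to it; both strategies below are invented for illustration:

```python
from collections import Counter, defaultdict

C, D = "C", "D"  # cooperate, defect

def weak_player(history):
    """A patterned strategy its owner believes looks unpredictable:
    defect on every third round, otherwise cooperate."""
    return D if len(history) % 3 == 2 else C

class Predictor:
    """Order-2 Markov model: predict the opponent's next move from their
    last two moves. A bigger compute budget buys a higher-order model."""
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(Counter)

    def predict(self, opp_history):
        ctx = tuple(opp_history[-self.order:])
        if self.counts[ctx]:
            return self.counts[ctx].most_common(1)[0][0]
        return C  # no data for this context yet: assume cooperation

    def update(self, opp_history, move):
        ctx = tuple(opp_history[-self.order:])
        self.counts[ctx][move] += 1

predictor = Predictor()
opp_moves, correct = [], 0
for _ in range(300):
    guess = predictor.predict(opp_moves)   # anticipate the move...
    actual = weak_player(opp_moves)        # ...the opponent actually makes
    correct += (guess == actual)
    predictor.update(opp_moves, actual)    # learn from what happened
    opp_moves.append(actual)

print(f"correct predictions: {correct}/300")
```

After a handful of rounds the order-2 model has seen every context and predicts essentially every move; the weaker player cannot find or remove the leak without a model at least as powerful.]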
[Image: Norbert Wiener]
[Wiener’s cybernetics emerged from the world of automation, military command, and computing during and after World War II. Wiener’s own work on control systems during the war existed within a set of projects and a technical agenda which aimed to automate human performance in battle through a tight coupling of people and machines. Indeed, before Wiener’s cybernetics, American technology was already suffused with what would later be called “cybernetic” ideas. Several strong pre-war traditions of feedback mechanisms — including regulators and governors, industrial process controls, military control systems, feedback electronics, and a nascent academic discipline of control theory — suggest a broader and more gradual convergence of communications and control than the strict “Wienerian” account. [6] Servo engineers turned to techniques common in the telephone network to characterize the behavior of powerful feedback devices. Radar engineers adapted communications theory to deal with noise in tracking. Human operators were always necessary but problematic components of automatic control systems. Military technologists had wrestled with the notion of prediction since at least the turn of the century. These were but a few of the features of the technological terrain onto which Norbert Wiener stepped in 1940 when he began working on control systems.]
This was of course before my time, but I understand that Wiener and von Neumann may have had some conflict over this issue, with Wiener seeing the notion of singularity and machine intelligence as potentially evil/dangerous, and von Neumann and Teller et al. pushing forward anyway and suppressing dissent. [Wiener's own comments seem to bear this out, as quoted by Bynum (2005), who in turn quotes from Wiener's 1954 book, The Human Use of Human Beings:
[A person should] not leap in where angels fear to tread, unless he is prepared to accept the punishment of the fallen angels. Neither will he calmly transfer to the machine made in his own image the responsibility for his choice of good and evil, without continuing to accept a full responsibility for that choice. (Wiener, 1954, p. 184)

On the other hand, the machine . . . which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us. For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind. (Wiener, 1954, p. 185)]

Also I want to emphasize that my story isn't really unique. The idea was in popular currency by 1990 in the classified world, and was probably discussed by many people in many different compartmentalized projects. I have no idea about this for obvious reasons, but there is no reason to suppose my little project was notably special or unique. This was two years before Raymond Kurzweil published his first book [The Age Of Spiritual Machines] mentioning the singularity concept.
I think the stories are clearly connected. We have not gotten into the Strategic Computing Initiative, but this was pretty well known and clearly shows that a lot of people were talking about exponential computing advantages by this point in time. The Japanese Fifth Generation computing initiative was another related effort. Supposedly these all "failed". [In light of this comment it is interesting to note the comments of Drew McDermott, who in 1985 at the opening of "The Dark Ages of AI: A Panel Discussion at AAAI-84" stated,
"Suppose that five years from now the strategic computing initiative collapses miserably as autonomous vehicles fail to roll. The fifth generation turns out not to go anywhere, and the Japanese government immediately gets out of computing. Every startup company fails... And there's a big backlash so that you can't get money for anything connected with AI. Everybody hurriedly changes the names of their research projects to something else."]
There are a number of things that conventional singularity thinking, as described and propagated by Kurzweil and his supporters, misses. I will mention some of these here; I don't mean this to be a complete list of all the missed points, fallacies, etc. I am sure others can add to my list.