Monday, August 22, 2011

Beyond von Neumann: IBM's Cognitive Computers, Part 2


John von Neumann designed the basic architecture of all modern personal computers back in the 1940s and 1950s.  Is it not time to move on?
Through engineering breakthroughs, this architecture has been enhanced and coaxed into producing faster and faster results.  That is a testament to the genius of certain electrical engineers.  But there is a wall that limits further order-of-magnitude speed increases.  To get past it we will need a cognitive computer, one that can think like the human brain.

Who was John von Neumann?  Any mathematician knows him well, but members of the general public may not.  For them, we include a short video on his life made by an admirer.  If you cannot see the embedded video, here is the link: http://youtu.be/kklb1J6Ij_U.


John von Neumann and others working during the 1940s are responsible for what the computer is today.  By that we mean that they laid out the basic structural design of the computer.  A very concise explanation of the limits of the von Neumann computer architecture is found on the SyNAPSE project's own website.
The vast majority of current-generation computing devices are based on the Von Neumann architecture. This core architecture is wonderfully generic and multi-purpose, attributes which enabled the information age. The Von Neumann architecture comes with a deep, fundamental limit, however. A Von Neumann processor can execute an arbitrary sequence of instructions on arbitrary data, enabling reprogrammability, but the instructions and data must flow over a limited capacity bus connecting the processor and main memory. Thus, the processor cannot execute a program faster than it can fetch instructions and data from memory. This limit is known as the “Von Neumann bottleneck.”
Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. (John Backus, quoted in Wikipedia)
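To make the bottleneck concrete, here is a minimal C sketch of our own (not from the SyNAPSE material). However fast the processor is, this loop cannot finish before every word of the array has crossed the processor-memory bus:

```c
#include <stdio.h>

#define N (1 << 20)  /* about one million words */

long data[N];

int main(void)
{
    long sum = 0;

    /* Each iteration pulls one word of data (and, conceptually, the
       instructions themselves) across the processor-memory bus. If the
       bus moves B words per second, the loop needs at least N / B
       seconds, no matter how fast the arithmetic unit is. */
    for (int i = 0; i < N; i++)
        sum += data[i];

    printf("sum = %ld\n", sum);
    return 0;
}
```

Modern caches hide some of this traffic, but they only narrow the bottleneck; the bus is still the limit whenever the data does not fit in the cache.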
The classic paper that produced our modern computer design was published in 1946 by von Neumann, Arthur Burks and Herman Goldstine, titled Preliminary Discussion of the Logical Design of an Electronic Computing Instrument.  We do not wish to get too technical, but if the reader will bear with us, we shall describe the basic architecture of a von Neumann computer.  An excellent summary is found in a 1987 essay by H. Norton Riley, a professor in the Computer Science Department of California State Polytechnic University, titled The von Neumann Architecture of Computer Systems.  We shall quote extensively from it,
To von Neumann, the key to building a general purpose device was in its ability to store not only its data and the intermediate results of computation, but also to store the instructions, or orders, that brought about the computation. In a special purpose machine the computational procedure could be part of the hardware. In a general purpose one the instructions must be as changeable as the numbers they acted upon. Therefore, why not encode the instructions into numeric form and store instructions and data in the same memory? This frequently is viewed as the principal contribution provided by von Neumann's insight into the nature of what a computer should be.
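A toy interpreter makes the stored-program idea tangible. In this C sketch of our own (the opcodes are invented for the illustration), a single array holds both the instructions and the numbers they operate on, exactly as von Neumann proposed:

```c
#include <stdio.h>

/* Invented opcodes for this toy machine. */
enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main(void)
{
    /* One memory: cells 0-9 hold the program, cells 10-12 hold data.
       Program: acc = mem[10]; acc += mem[11]; mem[12] = acc; halt. */
    int mem[16] = {
        LOAD,  10,   /* cells 0-1 */
        ADD,   11,   /* cells 2-3 */
        STORE, 12,   /* cells 4-5 */
        HALT,  0, 0, 0,
        2, 3, 0      /* cells 10-12: the data */
    };

    int pc = 0, acc = 0;
    for (;;) {
        int op  = mem[pc];      /* fetch: an instruction is just a number */
        int arg = mem[pc + 1];
        pc += 2;
        if      (op == LOAD)  acc = mem[arg];
        else if (op == ADD)   acc += mem[arg];
        else if (op == STORE) mem[arg] = acc;
        else break;             /* HALT */
    }
    printf("mem[12] = %d\n", mem[12]);  /* prints 5 */
    return 0;
}
```

Because the program is just numbers sitting in the same memory as its data, nothing but convention stops it from reading or overwriting its own instructions, which is exactly the blurring of instructions and data behind the first weakness discussed below.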
The genius of the von Neumann architecture back then has become the weakness of today's computers.  There are four basic weaknesses in the von Neumann architecture that inhibit modern computers from achieving greater speed and power.  The first is,
...the fact that instructions and data are distinguished only implicitly through usage. As he points out, the higher level languages currently used for programming make a clear distinction between the instructions and the data and have no provision for executing data or using instructions as data.
The second is, "...that the memory is a single memory, sequentially addressed." The third weakness "...is that the memory is one-dimensional."
...these are in conflict with our programming languages. Most of the resulting program, therefore, is generated to provide for the mapping of multidimensional data onto the one dimensioned memory and to contend with the placement of all of the data into the same memory.
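The mapping is easy to see in code. This short C sketch of ours does by hand what a compiler generates for every two-dimensional array: it flattens a row/column pair into a single offset in one-dimensional memory.

```c
#include <stdio.h>

#define ROWS 3
#define COLS 4

int main(void)
{
    int flat[ROWS * COLS];  /* memory itself is one-dimensional */

    /* A compiler turns a[i][j] into exactly this arithmetic
       (row-major order): offset = i * COLS + j. */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            flat[i * COLS + j] = 10 * i + j;

    printf("element (2,3) lives at offset %d and holds %d\n",
           2 * COLS + 3, flat[2 * COLS + 3]);
    return 0;
}
```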
As for the fourth weakness, the problem,
...is that the meaning of the data is not stored with it. In other words, it is not possible to tell by looking at a set of bits whether that set of bits represents an integer, a floating point number or a character string. In a higher level language, we associate such a meaning with the data, and expect a generic operation to take on a meaning determined by the meaning of its operands.
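A short C example of our own shows this directly: the same 32 bits read as an integer, a floating-point number, or four bytes, depending entirely on how the program chooses to interpret them.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int bits = 0x42280000;  /* just 32 bits in memory */

    float f;
    memcpy(&f, &bits, sizeof f);  /* reinterpret the same bits as a float */

    printf("as an integer: %u\n", bits);  /* 1109917696 */
    printf("as a float:    %g\n", f);     /* 42.0 on IEEE-754 machines */
    printf("as bytes:      %02x %02x %02x %02x\n",
           (bits >> 24) & 0xff, (bits >> 16) & 0xff,
           (bits >> 8) & 0xff, bits & 0xff);
    return 0;
}
```

Nothing in the memory word itself says which reading is the right one; that knowledge lives only in the instructions that operate on it.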
The implications of these weaknesses are significant.
One facet of this is the fundamental view of memory as a "word at a time" kind of device. A word is transferred from memory to the CPU or from the CPU to memory. All of the data, the names (locations) of the data, the operations to be performed on the data, must travel between memory and CPU a word at a time. Backus [1978] calls this the "von Neumann bottleneck." As he points out, this bottleneck is not only a physical limitation, but has served also as an "intellectual bottleneck" in limiting the way we think about computation and how to program it.
For those who would like to learn about this visually, we include some videos.  If you cannot see the embedded videos, here is the link: http://bit.ly/om4AN3.


In our next installment of this series we shall explain how these new cognitive computers depart from and surpass von Neumann computers.

3 comments:

Monica Anderson said...

The assertions made in the discussion about bottlenecks are incorrect.

"instructions and data are distinguished only implicitly through usage". This is not a problem in any programming language I know and is not a problem for any CPU or VM architecture I know. The compiler handles this.

"the memory is a single memory, sequentially addressed". This is not a problem either. In fact, it's a feature.

"the memory is one-dimensional". Same thing. Not a problem. The compiler allows not only N-dimensional arrays but heap-allocated object-oriented storage.

"the meaning of the data is not stored with it". Not a problem. The debugger can get this information from a symbol table stored with the program. The machine code does the right thing with its data since the compiler knew what each datum was at compile time. Further, in languages like Smalltalk, LISP and to some extent Python (in dynamically typed languages) this is even blatantly incorrect since type information is available at runtime. If this were an important feature that everyone wanted, then Smalltalk and LISP would have remained popular.

I expect to have a lot to say about the IBM SyNAPSE effort also. It solves nothing. It is a non-solution to a non-problem, and even for brain-like architectures, its architecture is blatantly wrong. It is designed for an incorrect view of the brain - the brain is not fully interconnected, while the chip is, leading to a massive waste of resources. Further, it doesn't leverage what we've learned about Connectionist architectures since 1985 (!) so the chip is basically a throwback to 1970's thinking. And finally, in any practical application of brain-like computing a conventional von Neumann architecture machine (at equivalent hardware cost but appropriately configured) will run circles around anything built with a set of SyNAPSE style chips. I'll get to details after I see your next post.

Guillermo Santamaria said...

Monica:
As always, your comments are welcome. I shall consider your points and look forward to any interactions you have with this article and the next one. Thanks for your valuable insights!


chumbly said...

Neural networks have significant advantages in recognising complex data patterns which may make them useful adjuncts to Von Neumann architectures.

AI & MEDICINE

