Saturday, August 20, 2011

Beyond von Neumann: IBM's Cognitive Computers, Part 1

IBM's new brain-inspired model of a computer is here.  How does it relate to the von Neumann model that is the basis for virtually all present computers?

Recently, IBM announced that it had developed "cognitive computers."  Even though the term may seem obvious, you might ask: what exactly is a cognitive computer?

A cognitive computer is a computer that can think about and understand the world to the same or similar extent as the human brain.  An IBM article on this subject by Dharmendra S. Modha, Manager of Cognitive Computing at the IBM Almaden Research Center, described it this way:
Making sense of real-time input flowing in at a dizzying rate is a Herculean task for today's computers, but would be natural for a brain-inspired system. Using advanced algorithms and silicon circuitry, cognitive computers learn through experiences, find correlations, create hypotheses, and remember—and learn from—the outcomes.
...the human brain—the world's most sophisticated computer—can perform complex tasks rapidly and accurately using the same amount of energy as a 20 watt light bulb in a space equivalent to a 2 liter soda bottle. IBM
This project is being funded by DARPA under the name Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) and is now in Phase 2.  It attempts to,
...reproduce the structure and architecture of the brain - the way its elements receive sensory input, connect to each other, adapt these connections, and transmit motor output - the SyNAPSE project models computing systems that emulate the brain's computing efficiency, size and power usage without being programmed.
Cognitive Model
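To make that loop of "receive input, adapt connections, transmit output" concrete, here is a minimal sketch in Python, purely for illustration and not IBM's actual design: a leaky integrate-and-fire neuron whose synapses strengthen whenever an input spike helps trigger an output spike.  All names and parameter values here are hypothetical.

```python
import random

class LIFNeuron:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9, learn_rate=0.05):
        # Synaptic weights start small and random; they are the neuron's "memory".
        self.weights = [random.uniform(0.0, 0.5) for _ in range(n_inputs)]
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.learn_rate = learn_rate

    def step(self, input_spikes):
        """input_spikes: list of 0/1 events, one per input line, for this tick."""
        # Leaky integration of the weighted input spikes.
        self.potential = self.leak * self.potential + sum(
            w * s for w, s in zip(self.weights, input_spikes)
        )
        fired = self.potential >= self.threshold
        if fired:
            self.potential = 0.0  # reset the membrane potential after a spike
            # Hebbian-style update: strengthen the synapses that just carried a spike.
            self.weights = [
                min(1.0, w + self.learn_rate) if s else w
                for w, s in zip(self.weights, input_spikes)
            ]
        return int(fired)

# Drive the neuron with a recurring input pattern; the active synapses grow.
neuron = LIFNeuron(n_inputs=4)
for _ in range(20):
    neuron.step([1, 0, 1, 0])
print(neuron.weights)
```
The point of this toy example is only that the "program" lives in the synaptic weights, which change with experience rather than being written in advance.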
This program is being co-funded by the National Science Foundation to the tune of $21 million.  Hewlett-Packard was also awarded a contract for the project and, in turn, hired subcontractors to develop the idea.

The idea is to first build robots with the computational intelligence of a rat, then a cat, and finally a human.  But there is more: they want to make it small using nanotechnology.  Creating this computer will therefore require a convergence of neuroscience, nanotechnology, and supercomputing.  IBM is not the only organization working on this DARPA project.  The other organizations and people are:
  1. H.S. Philip Wong and Brian Wandell at Stanford University
  2. Rajit Manohar at Cornell University
  3. Stefano Fusi at Columbia University Medical Center
  4. Giulio Tononi at the University of Wisconsin-Madison
  5. Christopher Kello at the University of California, Merced
  6. Greg Snider, Department of Cognitive and Neural Systems, Boston University
So why do we connect the development of this new computer chip to von Neumann?  It is simple: because Dr. Modha does.  He offers this interesting quote: "This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century."  Of course, to understand why this new chip would be moving beyond the von Neumann model, we must first understand that model (that will be the subject of the second part of this series).  Kurzweil News gives an excellent explanation, which we quote in full:
IBM is combining principles from nanoscience, neuroscience, and supercomputing as part of a multi-year cognitive computing initiative. IBM’s long-term goal is to build a chip system with ten billion neurons and hundred trillion synapses, while consuming merely one kilowatt of power and occupying less than two liters of volume. While they contain no biological elements, IBM’s first cognitive computing prototype chips use digital silicon circuits inspired by neurobiology to make up a “neurosynaptic core” with integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons). IBM has two working prototype designs. Both cores were fabricated in 45 nm SOI-CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification. IBM’s overarching cognitive computing architecture is an on-chip network of lightweight cores, creating a single integrated system of hardware and software. It represents a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain’s event-driven, distributed and parallel processing.
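To unpack what a "neurosynaptic core" integrating memory, computation, and communication might look like in software, here is a toy, event-driven sketch that assumes nothing about IBM's actual circuitry: a binary crossbar of synapses connects axons (inputs) to neurons, and work is only done when spike events arrive.  The sizes are scaled far down from the 256-neuron cores described above, and every name and parameter is illustrative.

```python
import random

N_AXONS, N_NEURONS = 16, 8           # toy sizes; real cores are far larger
THRESHOLD = 3

# Binary synapse crossbar: crossbar[axon][neuron] is 1 if that pair is connected.
# The crossbar is simultaneously the core's memory and its wiring.
crossbar = [[random.randint(0, 1) for _ in range(N_NEURONS)] for _ in range(N_AXONS)]
potentials = [0] * N_NEURONS

def deliver_spikes(axon_events):
    """Event-driven update: only axons that actually spiked touch the crossbar."""
    for axon in axon_events:
        for neuron in range(N_NEURONS):
            potentials[neuron] += crossbar[axon][neuron]
    fired = []
    for neuron in range(N_NEURONS):
        if potentials[neuron] >= THRESHOLD:
            fired.append(neuron)
            potentials[neuron] = 0   # reset; the spike would be routed onward
    return fired

# Example tick: three axons carry spike events into the core this time step.
print(deliver_spikes([0, 3, 7]))
```
Notice that the crossbar is both the memory and the communication fabric: there is no separate instruction stream shuttling data between a processor and a memory bank, which is exactly the contrast with the von Neumann model that the quote draws.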
In case that explanation was not crystal clear, we provide some videos explaining the purpose of this type of computer and this new approach. If you cannot see the embedded videos, here is the link: http://bit.ly/nLBNXS.
