Dr. Modha of IBM claims that for modern computers ever to simulate the brain of a human, or even that of a rat, their architecture must change.
He states that the brains of mammals are not neural networks but synaptic networks. The comparison he drew between a rat's brain and the computers needed to simulate it is astonishing.
Let's look at the power of a rat brain. It's amazing: 50 milliwatts, it's a very tiny thing. The rat brain cortex is about six square centimeters. Whereas we need 16 racks of supercomputers, 32,768 processors, 8 terabytes of main memory, half a megawatt of power. So really the computing architecture that we are pursuing as humankind and the computing architecture that the brain has evolved are very different. It really is a meaningful question not only whether we can learn about cognition from the mind and the brain, but whether we can learn about novel architectures for computing.
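To make the gap concrete, a few lines of arithmetic on the figures Dr. Modha quotes (50 milliwatts for the rat cortex versus half a megawatt for the simulation) show just how far apart the two architectures are:

```python
# Illustrative arithmetic only, using the figures quoted above.
rat_brain_watts = 0.050          # 50 milliwatts for the rat cortex
supercomputer_watts = 500_000.0  # half a megawatt for the simulation

power_ratio = supercomputer_watts / rat_brain_watts
print(f"The simulation draws {power_ratio:,.0f}x the power of the brain it models")
```

That is a factor of ten million in power alone, before counting the 16 racks of hardware against six square centimeters of cortex.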
...today, as digital and physical worlds collide, at the boundary there is a veritable tsunami of data; literally, we are swimming in sensors and drowning in data. When confronted with this challenge, today's computers are really, literally, overgrown, highly power-expensive calculators. (Dr. Modha, IBM)

How Do We Power Moore's Law Into The Future?
To underscore this problem, the July 31, 2011 edition of The New York Times carried an article by John Markoff titled "Progress Hits Snag: Tiny Chips Use Outsize Power." The article describes something called "dark silicon," an expression explained by Dr. William Dally of Nvidia, who was interviewed for the piece:
"It is true that simply taking old processor architectures and scaling them won't work anymore," said William J. Dally, chief scientist at Nvidia, a maker of graphics processors, and a professor of computer science at Stanford University. "Real innovation is required to make progress today."

The article elucidates dark silicon further:
...the most advanced microprocessor chips have so many transistors that it is impractical to supply power to all of them at the same time. So some of the transistors are left unpowered — or dark, in industry parlance — while the others are working. The phenomenon is known as dark silicon.

The article also cited a paper from the International Symposium on Computer Architecture, authored by researchers from the University of Texas, the University of Wisconsin, and Microsoft, titled "Dark Silicon and the End of Multicore Scaling." Its abstract further emphasizes the point:
Since 2005, processor designers have increased core counts to exploit Moore's Law scaling, rather than focusing on single-core performance. The failure of Dennard scaling, to which the shift to multicore parts is partially a response, may soon limit multicore scaling just as single-core scaling has been curtailed. This paper models multicore scaling limits by combining device scaling, single-core scaling, and multicore scaling to measure the speedup potential for a set of parallel workloads for the next five technology generations. For device scaling, we use both the ITRS projections and a set of more conservative device scaling parameters. To model single-core scaling, we combine measurements from over 150 processors to derive Pareto-optimal frontiers for area/performance and power/performance. Finally, to model multicore scaling, we build a detailed performance model of upper-bound performance and lower-bound core power. The multicore designs we study include single-threaded CPU-like and massively threaded GPU-like multicore chip organizations with symmetric, asymmetric, dynamic, and composed topologies. The study shows that regardless of chip organization and topology, multicore scaling is power limited to a degree not widely appreciated by the computing community. Even at 22 nm (just one year from now), 21% of a fixed-size chip must be powered off, and at 8 nm, this number grows to more than 50%. Through 2024, only 7.9× average speedup is possible across commonly used parallel workloads, leaving a nearly 24-fold gap from a target of doubled performance per generation.

Dr. Babak Falsafi of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland stated in a lecture titled "Dark Silicon and Its Implication on Server Design" that to keep up with Moore's Law, the energy consumption of computer chips will have to be reduced by a factor of 100. If you wish to see his lecture, we present it here.
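The dark-silicon arithmetic can be sketched with a toy model (our own simplification for illustration, not the cited paper's methodology): if transistor density doubles each generation but, with Dennard scaling broken, per-transistor power falls more slowly, then under a fixed chip power budget the fraction of transistors that can be active shrinks every generation. The density and power factors below are illustrative assumptions, not measured values.

```python
# Toy dark-silicon model (illustrative assumptions, not the paper's data):
# density doubles per generation, but per-transistor power falls only
# ~1.4x per generation, under a fixed total power budget.
def active_fraction(generations, density_gain=2.0, power_gain=1.4):
    """Fraction of the chip's transistors that fit in the power budget."""
    transistors = density_gain ** generations       # relative transistor count
    per_device = 1.0 / power_gain ** generations    # relative power per device
    return min(1.0, 1.0 / (transistors * per_device))

for gen in range(5):
    print(f"generation {gen}: {active_fraction(gen):.0%} of the chip can be lit")
```

Even this crude model shows more than half the chip going dark within a few generations, which is the trend the abstract quantifies rigorously.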
If you cannot see the embedded video, here is the link: http://bit.ly/n6J2Pr.
One may have noticed that for years now the clock speed of chips has hovered around three gigahertz; driving them faster would cause them to melt with present technology. This pessimistic view of the future speed and power of computer chips, however, is not universally held among computer scientists. Dr. Dally cited today's inefficient designs as a good place where speed can be increased without much higher demands for power. The Times article stated that "in the future, Intel computers will have different kinds of cores optimized for different kinds of problems, only some of which require high power."
IBM's Cognitive Computers
According to Dr. Modha, SyNAPSE was the "brainchild" of Dr. Todd Hylton of Stanford University's Department of Applied Physics, based on his work on neuromorphic electronics. He serves as the program manager for DARPA's SyNAPSE project. Of course, this kind of work requires a supercomputer. IBM is using its own Blue Gene/L supercomputer (soon to be superseded by the Blue Gene/Q system due out in 2012). The Blue Gene that IBM used is located at the Lawrence Livermore National Laboratory. It uses 147,586 CPUs and 144 terabytes of memory. We have made some videos available concerning this supercomputer. If you cannot see the embedded video, here is the link: http://bit.ly/qiKddo.
In a 2008 article on embarking on SyNAPSE's Phase 0 program, Dr. Modha stated:
By seeking inspiration from the structure, dynamics, function, and behavior of the brain, the IBM-led cognitive computing research team aims to break the conventional programmable machine paradigm. Ultimately, the team hopes to rival the brain’s low power consumption and small size by using nanoscale devices for synapses and neurons. This technology stands to bring about entirely new computing architectures and programming paradigms. The end goal: ubiquitously deployed computers imbued with a new intelligence that can integrate information from a variety of sensors and sources, deal with ambiguity, respond in a context-dependent way, learn over time and carry out pattern recognition to solve difficult problems based on perception, action and cognition in complex, real-world environments.

The goal is to use nanoscale devices to emulate the brain's neurons and synapses. In the same 2008 article, Dr. Modha further explained the initial goals of this $4.9 million DARPA-funded program:
IBM’s proposal, “Cognitive Computing via Synaptronics and Supercomputing (C2S2),” outlines groundbreaking research over the next nine months in areas including synaptronics, material science, neuromorphic circuitry, supercomputing simulations and virtual environments. Initial research will focus on demonstrating nanoscale, low power synapse-like devices and on uncovering the functional microcircuits of the brain.
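The quotes describe emulating neurons and synapses in hardware. As a software-level illustration of the basic dynamic such devices aim to reproduce (our own minimal sketch, not IBM's code, with arbitrary parameter values), a leaky integrate-and-fire neuron accumulates weighted synaptic input, leaks potential over time, and emits a spike when it crosses a threshold:

```python
# Minimal leaky integrate-and-fire neuron (illustrative sketch only;
# parameters are arbitrary and not drawn from the SyNAPSE project).
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential at which the neuron fires
        self.leak = leak            # fraction of potential kept each step
        self.potential = 0.0

    def step(self, weighted_input):
        """Integrate one time step; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

neuron = LIFNeuron()
spikes = [neuron.step(0.3) for _ in range(10)]  # constant synaptic drive
print(spikes)
```

Under a steady input the neuron fires periodically; the engineering challenge the program describes is realizing this integrate-leak-fire behavior in nanoscale physical devices rather than in software.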
We include a video by Dr. Modha from Phase 0, back in 2008. The video is cut off at the end but is still informative. If you cannot see the embedded video, here is the link: http://youtu.be/1y0NOa-yjr8.
We also present the talk that Dr. Modha considers his best on this subject. If you cannot see the embedded video, here is the link: http://bit.ly/nmUNNt.
In our next installment of this series, we shall try to cover in more detail the phases of this project and the progress that has been made.