IBM has been shipping computers for more than 65 years, and it is finally on the verge of creating a true electronic brain.
Big Blue is announcing today that it, along with four universities and the Defense Advanced Research Projects Agency (DARPA), has created the basic design of an experimental computer chip that emulates the way the brain processes information.
IBM’s so-called cognitive computing chips could one day simulate and emulate the brain’s ability to sense, perceive, interact and recognize — all tasks that humans can currently do much better than computers can.
Dharmendra Modha (pictured below right) is the principal investigator of the DARPA project, called Synapse (Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE). He is also a researcher at the IBM Almaden Research Center in San Jose, Calif.
“This is the seed for a new generation of computers, using a combination of supercomputing, neuroscience, and nanotechnology,” Modha said in an interview with VentureBeat. “The computers we have today are more like calculators. We want to make something like the brain. It is a sharp departure from the past.”
If it eventually leads to commercial brain-like chips, the project could turn computing on its head, overturning the conventional style of computing that has ruled since the dawn of the information age and replacing it with something that is much more like a thinking artificial brain. The eventual applications could have a huge impact on business, science and government. The idea is to create computers that are better at handling real-world sensory problems than today’s computers can. IBM could also build a better Watson, the computer that became the world champion at the game show Jeopardy earlier this year.
We wrote about the project when IBM announced it in November 2008, and again when it hit its first milestone in November 2009. Now the researchers have completed phase one of the project: designing a fundamental computing unit that can be replicated over and over to form the building blocks of an actual brain-like computer.
Richard Doherty, an analyst at the Envisioneering Group who has been briefed on the project, said there is “nothing even close” to this project’s level of sophistication in cognitive computing.
This new computing unit, or core, is analogous to the brain. It has “neurons,” or digital processors that compute information. It has “synapses” which are the foundation of learning and memory. And it has “axons,” or data pathways that connect the tissue of the computer.
While it sounds simple enough, the computing unit is radically different from the way most computers operate today. Modern computers are based on the von Neumann architecture, named after computing pioneer John von Neumann and his work from the 1940s.
In von Neumann machines, memory and processor are separated and linked via a data pathway known as a bus. Over the past 65 years, von Neumann machines have gotten faster by sending more and more data at higher speeds across the bus, as processor and memory interact. But the speed of a computer is often limited by the capacity of that bus, leading some computer scientists to call it the “von Neumann bottleneck.”
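The bottleneck is easy to see with back-of-envelope arithmetic. The sketch below uses round, hypothetical numbers (the 25.6 GB/s bus figure is illustrative, not from the article) to show how bus bandwidth, not processor speed, caps throughput:

```python
# Toy illustration of the von Neumann bottleneck: no matter how fast
# the processor runs, it can only compute as fast as operands arrive
# over the memory bus. All figures are hypothetical round numbers.
bus_bandwidth_bytes = 25_600_000_000   # e.g. a 25.6 GB/s memory bus
bytes_per_operand = 8                  # one double-precision value
operands_per_op = 2                    # each operation fetches two values

max_ops_per_sec = bus_bandwidth_bytes // (bytes_per_operand * operands_per_op)
print(f"{max_ops_per_sec:,}")  # prints 1,600,000,000
```

Even paired with a 5 GHz core, this hypothetical bus would cap the machine at roughly 1.6 billion memory-bound operations per second.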
With the human brain, the memory is located with the processor (at least, that’s how it appears, based on our current understanding of what is admittedly a still-mysterious three pounds of meat in our heads).
The brain-like processors with integrated memory don’t operate fast at all, sending signals at a mere 10 hertz, far slower than today’s 5-gigahertz computer processors. But the human brain does an enormous amount of work in parallel, sending signals out in all directions and getting its neurons to work simultaneously. Because the brain has more than 10 billion neurons and 10 trillion connections (synapses) between those neurons, that parallelism adds up to an enormous amount of computing power.
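The article’s own round numbers make the point: multiply the slow per-neuron rate by the sheer number of connections, and the aggregate event rate dwarfs a serial processor. A quick worked calculation:

```python
# Back-of-envelope arithmetic using the article's round figures.
neuron_rate_hz = 10              # each neuron fires ~10 times per second
synapses = 10_000_000_000_000    # 10 trillion connections

# If every connection carries an event at that rate, the brain handles:
brain_events_per_sec = neuron_rate_hz * synapses
print(f"{brain_events_per_sec:.0e}")   # prints 1e+14

# Compare against a single 5 GHz serial processor handling one event per cycle:
cpu_hz = 5_000_000_000
print(brain_events_per_sec // cpu_hz)  # prints 20000 (a 20,000x gap)
```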
IBM wants to emulate that architecture with its new chips.
“We are now doing a new architecture,” Modha said. “It departs from von Neumann in a variety of ways.”
The research team has built its first brain-like computing units, each with 256 neurons, an array of 256 by 256 (a total of 65,536) synapses, and 256 axons. (A second chip had 262,144 synapses.) In other words, each unit has the basic building blocks of processor, memory, and communications. This unit, or core, can be built with just a few million transistors (some of today’s fastest microchips are built with billions of transistors).
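The core’s anatomy can be sketched as a toy data structure. This is purely illustrative, assuming nothing about IBM’s actual circuit design; all names here are hypothetical:

```python
import numpy as np

class Core:
    """Toy sketch of one neurosynaptic core: digital 'neurons' that
    compute, a crossbar of 'synapses' that stores learned weights
    right next to the neurons, and 'axons' that carry spikes out.
    Illustrative only -- not IBM's actual design."""

    def __init__(self, n=256):
        self.neuron_state = np.zeros(n)        # 256 neurons (processors)
        self.synapses = np.zeros((n, n))       # 256 x 256 crossbar (memory)
        self.axons = np.zeros(n, dtype=bool)   # 256 output pathways

core = Core()
print(core.synapses.size)  # prints 65536, matching the chip's synapse count
```

Note that memory (the synapse crossbar) lives inside the core with the neurons, rather than across a bus, which is the key departure from the von Neumann layout described above.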
Modha said that this new kind of computing will likely complement, rather than replace, von Neumann machines, which have become good at solving problems involving math, serial processing, and business computations. The disadvantage is that those machines aren’t scaling up to handle big problems well any more. They are using too much power and are harder to program.
The more powerful a computer gets, the more power it consumes, and manufacturing requires extremely precise and expensive technologies. And the more components are crammed together onto a single chip, the more they “leak” power, even in stand-by mode. So they are not so easily turned off to save power.
The advantage of the human brain is that it operates on very low power and it can essentially turn off parts of the brain when they aren’t in use.
These new chips won’t be programmed in the traditional way. Cognitive computers are expected to learn through experiences, find correlations, create hypotheses, remember, and learn from the outcomes. They mimic the brain’s “structural and synaptic plasticity.” The processing is distributed and parallel, not centralized and serial.
With no set programming, the computing cores that the researchers have built can mimic the event-driven brain, which wakes up to perform a task.
Modha said the cognitive chips could get by with far less power consumption than conventional chips.
The so-called “neurosynaptic computing chips” recreate the phenomenon known in the brain as “spiking,” in which neurons fire signals across synapses. The system can handle complex tasks such as playing a game of Pong, the original computer game from Atari, Modha said.
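The spiking, event-driven behavior can be illustrated with a minimal leaky integrate-and-fire neuron, a standard textbook model (this is a generic sketch of the concept, not IBM’s circuit; the parameters are made up):

```python
# Minimal leaky integrate-and-fire neuron: it stays quiet until
# accumulated input pushes its membrane potential over a threshold,
# then it emits a spike and resets. A generic sketch, not IBM's design.
def simulate(inputs, leak=0.9, threshold=1.0):
    potential, spike_times = 0.0, []
    for t, x in enumerate(inputs):
        potential = potential * leak + x   # integrate input, leak charge
        if potential >= threshold:         # fire and reset
            spike_times.append(t)
            potential = 0.0
    return spike_times

print(simulate([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))  # prints [2, 5]
```

The neuron does no work between events, which is the event-driven, low-power property the article describes: computation happens only when a spike must fire.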
Two prototype chips have already been fabricated and are being tested. Now the researchers are about to embark on phase two, where they will build a computer. The goal is to create a computer that not only analyzes complex information from multiple senses at once, but also dynamically rewires itself as it interacts with the environment, learning from what happens around it.
The chips themselves have no actual biological pieces. They are fabricated from digital silicon circuits that are inspired by neurobiology. The technology uses 45-nanometer silicon-on-insulator complementary metal oxide semiconductors. In other words, it uses a very conventional chip manufacturing process. One of the cores contains 262,144 programmable synapses, while the other contains 65,536 learning synapses.
Besides playing Pong, the IBM team has tested the chip on solving problems related to navigation, machine vision, pattern recognition, associative memory (where you remember one thing that goes with another thing) and classification.
Eventually, IBM will combine the cores into a fully integrated system of hardware and software. IBM wants to build a computer with 10 billion neurons and 100 trillion synapses, Modha said. That’s as powerful as the human brain. The complete system will consume one kilowatt of power and will occupy less than two liters of volume (the size of our brains), Modha predicts. By comparison, today’s fastest IBM supercomputer, Blue Gene, has 147,456 processors, more than 144 terabytes of memory, occupies a huge, air-conditioned cabinet, and consumes more than 2 megawatts of power.
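The power gap in that comparison is worth spelling out with the article’s own figures:

```python
# Projected cognitive system vs. Blue Gene, using the article's figures.
cognitive_watts = 1_000        # one kilowatt
blue_gene_watts = 2_000_000    # more than 2 megawatts

print(blue_gene_watts // cognitive_watts)  # prints 2000
```

In other words, the projected brain-like machine would draw at least 2,000 times less power than Blue Gene.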
As a hypothetical application, IBM said that a cognitive computer could monitor the world’s water supply via a network of sensors and tiny motors that constantly record and report data such as temperature, pressure, wave height, acoustics, and ocean tide. It could then issue tsunami warnings in case of an earthquake. Or, a grocer stocking shelves could use an instrumented glove that monitors sights, smells, texture and temperature to flag contaminated produce. Or a computer could absorb data and flag unsafe intersections that are prone to traffic accidents. Those tasks are too hard for traditional computers.
Synapse is funded with a $21 million grant from DARPA, and it involves six IBM labs, four universities (Cornell, the University of Wisconsin, University of California at Merced, and Columbia) and a number of government researchers.
For phase 2, IBM is working with a team of researchers that includes Columbia University; Cornell University; University of California, Merced; and University of Wisconsin, Madison. While this project is new, IBM has been studying brain-like computing as far back as 1956, when it created the world’s first (512-neuron) brain simulation.
“If this works, this is not just a 5 percent leap,” Modha said. “This is a leap of orders of magnitude forward. We have already overcome huge conceptual roadblocks.”
[Photo credits: Dean Takahashi, IBM]
We’ve included a number of videos that IBM has released describing the project.
IBM’s Bill Risk builds a brain wall.
Columbia University’s Stefano Fusi describes the brain vs. the computer.
IBM researchers John Arthur and Paul Merolla describe the inspiration for the project.
IBM researcher Dan Friedman discusses circuit architecture of the brain computer.
Steven Esser of IBM Research describes the software.
An overview of cognitive computing.