Neuromorphic engineering, also known as neuromorphic computing, describes the use of systems containing electronic analog circuits to mimic the neuro-biological architectures present in the nervous system. Scientists at MIT, Purdue, Stanford, IBM, HP, and elsewhere have pioneered pieces of full-stack systems, but arguably few have come closer than Intel to one of the longstanding goals of neuromorphic research — a supercomputer a thousand times more powerful than any available today.
Case in point? Today during the Defense Advanced Research Projects Agency’s (DARPA) Electronics Resurgence Initiative 2019 summit in Detroit, Michigan, Intel unveiled a system codenamed “Pohoiki Beach,” a 64-chip computer capable of simulating 8 million neurons in total. Intel Labs managing director Rich Uhlig said Pohoiki Beach will be made available to 60 research partners to “advance the field” and scale up AI algorithms like sparse coding and path planning.
“We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems,” said Uhlig.
Pohoiki Beach packs 64 of Intel’s 128-core, 14-nanometer Loihi neuromorphic chips, which were first announced in late 2017 and detailed at the 2018 Neuro Inspired Computational Elements (NICE) workshop in Oregon. Each chip has a 60-square-millimeter die and contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses, in addition to three embedded Lakemont cores for task orchestration. Uniquely, Loihi features a programmable microcode learning engine for on-chip training of asynchronous spiking neural networks (SNNs) — AI models that incorporate time into their operating model, such that components of the model don’t process input data simultaneously. The engine enables the implementation of adaptive, self-modifying, event-driven, and fine-grained parallel computations with high efficiency.
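To make the time-dependent behavior of an SNN concrete, here is a minimal sketch of a single leaky integrate-and-fire (LIF) neuron in plain Python — a generic illustration of spiking dynamics, not Intel’s Loihi implementation, with the `decay`, `weight`, and `threshold` parameters chosen purely for the example:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a hypothetical
# illustration of spiking dynamics, not Intel's Loihi microcode.

def simulate_lif(input_spikes, decay=0.9, weight=0.4, threshold=1.0):
    """Step through discrete time; each input spike bumps the membrane
    potential, which otherwise leaks away. Crossing the threshold emits
    an output spike and resets the potential."""
    v = 0.0
    output_times = []
    for t, spike_in in enumerate(input_spikes):
        v = v * decay + weight * spike_in  # leak, then integrate input
        if v >= threshold:
            output_times.append(t)  # record the output spike's time step
            v = 0.0                 # reset after firing
    return output_times

# A dense burst of input spikes drives the neuron over threshold;
# sparse input does not — the timing of inputs matters, not just their count.
print(simulate_lif([1, 1, 1, 0, 0, 1, 0, 1]))  # → [2]
```

The point of the sketch is that identical numbers of input spikes produce different outputs depending on when they arrive, which is what distinguishes SNNs from conventional networks that process a whole input at once.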
The Loihi development toolchain comprises the Loihi Python API, a compiler, and a set of runtime libraries for building and executing SNNs on Loihi. It provides a way to create a graph of neurons and synapses with custom configurations, such as decay time, synaptic weight, and spiking thresholds, and a means of simulating those graphs by injecting external spikes and applying custom learning rules.
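The Loihi API itself is available only to research partners, so as a hedged stand-in, the following is a tiny dependency-free sketch of the workflow the toolchain describes: build a graph of neurons (each with its own decay and threshold) connected by weighted synapses, then drive it by injecting external spikes over discrete time steps. All class and parameter names here are invented for illustration:

```python
# Hypothetical mini-framework mirroring the described workflow
# (neuron graph + external spike injection); not the Loihi Python API.

class Neuron:
    def __init__(self, decay=0.8, threshold=1.0):
        self.decay, self.threshold = decay, threshold
        self.v = 0.0  # membrane potential

class Network:
    def __init__(self):
        self.neurons = []
        self.synapses = []  # (src_index, dst_index, weight)

    def add_neuron(self, **kwargs):
        self.neurons.append(Neuron(**kwargs))
        return len(self.neurons) - 1

    def connect(self, src, dst, weight):
        self.synapses.append((src, dst, weight))

    def run(self, steps, external):
        """external maps time step -> {neuron_index: input current}.
        Returns a list of (time_step, neuron_index) spike events."""
        events = []
        for t in range(steps):
            inputs = dict(external.get(t, {}))  # injected external spikes
            fired = []
            for i, n in enumerate(self.neurons):
                n.v = n.v * n.decay + inputs.get(i, 0.0)  # leak + integrate
                if n.v >= n.threshold:
                    fired.append(i)
                    n.v = 0.0
            # Propagate this step's spikes along synapses to the next step.
            for src, dst, w in self.synapses:
                if src in fired:
                    nxt = external.setdefault(t + 1, {})
                    nxt[dst] = nxt.get(dst, 0.0) + w
            events.extend((t, i) for i in fired)
        return events

net = Network()
a = net.add_neuron(threshold=0.5)
b = net.add_neuron(threshold=0.5)
net.connect(a, b, weight=0.6)
# Inject one external spike into neuron `a` at t=0; it fires and, one
# step later, its synapse pushes neuron `b` over threshold.
print(net.run(steps=3, external={0: {a: 1.0}}))  # → [(0, 0), (1, 1)]
```

The one-step synaptic delay in the sketch reflects the event-driven character the article attributes to Loihi: activity propagates only when and where spikes occur, rather than through dense synchronous matrix operations.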
According to Intel, Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors, and it can solve certain types of optimization problems with more than three orders of magnitude gains in speed and energy efficiency compared to conventional CPU operations. Moreover, the chipmaker claims that Loihi maintains real-time performance and uses only 30% more power when scaled up 50 times, whereas traditional hardware uses 500% more power. And it says the chip consumes roughly 100 times less energy than widely used CPU-run simultaneous localization and mapping (SLAM) methods.
“With [Loihi], we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware,” said Chris Eliasmith, co-CEO of Applied Brain Research and professor at the University of Waterloo, whose team was provided access to the Loihi chip.
Intel says that later this year it will introduce an even larger Loihi system — Pohoiki Springs — that will deliver an “unprecedented” level of performance and efficiency for neuromorphic workloads with upwards of 100 million neurons. Additionally, the Santa Clara company says it will continue to provide access to its Loihi cloud systems and Kapoho Bay, a Loihi-based USB form factor system, through the Intel Neuromorphic Research Community.