During the virtually held Robotics: Science and Systems 2020 conference this week, scientists affiliated with the National University of Singapore (NUS) presented research that combines robotic vision and touch sensing with Intel-designed neuromorphic processors. The researchers claim the “electronic skin,” dubbed Asynchronous Coded Electronic Skin (ACES), can detect touches more than 1,000 times faster than the human nervous system and identify the shape, texture, and hardness of objects within 10 milliseconds. At the same time, ACES is designed to be modular and highly robust to damage, ensuring it can keep working as long as at least one sensor remains functional.

The human sense of touch is fine-grained enough to distinguish between surfaces that differ by only a single layer of molecules, yet the majority of today’s autonomous robots operate solely via visual, spatial, and inertial processing techniques. Bringing humanlike touch to machines could significantly improve their utility and even lead to new use cases. For example, robotic arms with artificial “skin” could employ tactile sensing to detect and grip unfamiliar objects with just the right amount of pressure.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing their system. ACES comprises a network of sensors connected to a single electrical conductor; each sensor captures and transmits its signals asynchronously, allowing the system to differentiate contacts between individual sensors. ACES takes less than 60 nanoseconds to detect touch, reportedly the fastest rate to date for “electronic skin.”
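
To make the event-driven idea concrete, here is a minimal, hypothetical sketch of asynchronous touch encoding in Python: each sensor emits a timestamped event only when its reading changes, rather than being polled on a fixed schedule. The event fields, threshold, and taxel count are illustrative assumptions, not the published ACES signaling scheme.

```python
import time
from dataclasses import dataclass

# Hypothetical illustration of asynchronous, event-based touch encoding.
# The event fields, threshold, and taxel count are assumptions for
# illustration, not the published ACES signaling scheme.

@dataclass
class TouchEvent:
    sensor_id: int      # which taxel fired
    timestamp_ns: int   # when the pressure change was detected
    polarity: int       # +1 for a pressure increase, -1 for a decrease

def encode_events(readings, previous, threshold=0.05):
    """Emit an event only for sensors whose pressure changed enough,
    instead of streaming every sensor value on every frame."""
    now = time.monotonic_ns()
    events = []
    for sensor_id, (p, p_prev) in enumerate(zip(readings, previous)):
        delta = p - p_prev
        if abs(delta) >= threshold:
            events.append(TouchEvent(sensor_id, now, 1 if delta > 0 else -1))
    return events

# Toy example with four taxels: only taxel 2 changed enough to fire.
previous = [0.00, 0.10, 0.20, 0.00]
current = [0.00, 0.12, 0.40, 0.01]
print(encode_events(current, previous))
```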

An Intel Loihi neuromorphic chip processes the data collected by the ACES sensors. (Neuromorphic engineering, also known as neuromorphic computing, describes the use of circuits that mimic the nervous system’s neurobiological architectures.) The 14-nanometer processor, which has a 60-square-millimeter die and contains over 2 billion transistors, features a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). SNNs incorporate time into their operating model, so the components of the model don’t process input data simultaneously, supporting workloads like touch perception that involve self-modifying and event-driven parallel computations.
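
For readers unfamiliar with SNNs, the sketch below implements a single leaky integrate-and-fire neuron, the basic building block of a spiking network, in plain Python. The time constant, threshold, and weight are arbitrary assumptions, and the model is far simpler than Loihi’s programmable neuron cores; it is meant only to show how spikes and time enter the computation.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a spiking
# neural network. Constants are arbitrary assumptions; this is far simpler
# than Loihi's programmable neuron model.

def lif_neuron(input_spikes, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0, w=0.6):
    """Integrate weighted input spikes over time and emit an output spike
    whenever the membrane potential crosses threshold."""
    v = 0.0
    out = []
    for s in input_spikes:                 # one timestep at a time
        v += dt * (-v / tau) + w * s       # leak toward rest, add spike input
        if v >= v_thresh:                  # threshold crossing -> output spike
            out.append(1)
            v = v_reset                    # reset the membrane potential
        else:
            out.append(0)
    return out

# A sparse input spike train: the neuron only does meaningful work where
# events arrive, which is what makes SNNs a natural fit for event-driven
# touch data.
print(lif_neuron([0, 1, 0, 0, 1, 1, 0, 1, 0, 1]))
```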

According to Intel, Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors, and it can solve certain types of optimization problems with gains in speed and energy efficiency greater than three orders of magnitude. Moreover, Loihi maintains real-time performance while using only 30% more power when scaled up 50 times, whereas traditional hardware uses 500% more power. It also consumes roughly 100 times less energy than widely used CPU-run simultaneous localization and mapping (SLAM) methods.

Above: A visualization showing the ACES sensor feedback. (Image Credit: Intel)

In their initial experiment, the NUS researchers used a robotic hand fitted with ACES to read Braille, passing the tactile data to Loihi via the cloud. Loihi achieved over 92% accuracy in classifying the Braille letters while using 20 times less power than a standard classical processor, according to the research.

Building on this work, the NUS team further improved ACES’ perception capabilities by combining vision and touch data in an SNN. To do so, they tasked a robot with classifying various opaque containers containing differing amounts of liquid, using sensory inputs from ACES and recordings from an RGB video camera. Leveraging the same tactile and vision sensors, they also tested the ability of the perception system to identify rotational slip, an important metric for object grasping.
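
A rough way to picture the visual-tactile fusion step is the sketch below, which rate-codes two feature vectors into spike counts, concatenates them, and applies a linear readout to score container classes. The feature dimensions, coding scheme, and readout are assumptions made for illustration; the actual model is a spiking network trained to run on Loihi.

```python
import numpy as np

# Hypothetical sketch of early fusion of touch and vision features.
# Feature dimensions, the rate-coding step, and the linear readout are
# assumptions for illustration; the NUS system uses a spiking network
# running on Loihi.

rng = np.random.default_rng(0)

def rate_code(features, n_steps=50):
    """Turn real-valued features into spike counts by treating each
    feature as a per-timestep firing probability."""
    probs = np.clip(features, 0.0, 1.0)
    spikes = rng.random((n_steps, probs.size)) < probs
    return spikes.sum(axis=0)            # spike count per channel

def fuse_and_score(touch_feats, vision_feats, readout_weights):
    """Concatenate spike-count features from both modalities and apply a
    linear readout to produce one score per container class."""
    fused = np.concatenate([rate_code(touch_feats), rate_code(vision_feats)])
    return readout_weights @ fused

# Toy example: 8 tactile channels, 16 visual channels, 4 container classes.
touch = rng.random(8)
vision = rng.random(16)
weights = rng.standard_normal((4, 8 + 16))
print(fuse_and_score(touch, vision, weights).argmax())
```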

Once this sensory data had been captured, the team sent it to both a graphics card and a Loihi chip to compare processing capabilities. The results show that combining vision and touch with an SNN led to 10% greater object classification accuracy versus a vision-only system. They also demonstrate Loihi’s prowess for sensory data processing: The chip was 21% faster than the best-performing graphics card while using 45 times less power.

ACES can be paired with other synthetic “layers” of skin, like the transparent self-healing sensor skin layer developed by NUS assistant professor Benjamin Tee (a coauthor of the ACES research). Potential applications include disaster recovery robots and prosthetic limbs that help disabled people restore their sense of touch.

Along with Intel, researchers at IBM, HP, MIT, Purdue, and Stanford hope to leverage neuromorphic computing to develop supercomputers a thousand times more powerful than any available today. Chips like Loihi excel at constraint satisfaction problems, which require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. They’ve also been shown to rapidly identify the shortest paths in graphs and perform approximate image searches, as well as to optimize specific objectives over time in real-world problems.

ACES is among the first practical demonstrations of the technology’s capabilities, following Intel research showing neuromorphic chips can be used to “teach” an AI model to distinguish between 10 different scents.
