As easy performance gains slow down, chip designers are turning to specialized processors for neural networks, self-driving cars, and the Internet of Things to keep electronics design moving forward.

Moore’s Law, the 1965 prediction by Intel chairman emeritus Gordon Moore that the number of components on a chip would double every year, isn’t quite what it used to be. As a result, the easy gains in performance that chip makers used to enjoy with every new generation are harder to come by, said Linley Gwennap, principal analyst at the Linley Group.

Above: Vision processing is needed for face recognition.

Image Credit: The Linley Group

Speaking at the Linley Processor Forum in Santa Clara, California, Gwennap said that Moore’s Law is only for the rich. Big chip makers like Intel can afford the multibillion-dollar investments in chip factories to stay on the edge in manufacturing, which yields faster, cheaper, and smaller chips with each generation of chip-making equipment. Costs are rising rapidly, and only those who can charge a lot of money for chips will move to 14-nanometer or 10-nanometer manufacturing from today’s mainstream 28-nanometer technology.

In some ways, this slowdown in tech progress is forcing designers to be more thoughtful. They have to conceive architectural breakthroughs for chips, rather than rely on manufacturing advances. In the past, only general-purpose processors had the manufacturing volumes to justify the investment in factories. But now, designers have to create specialized processors.

And the payoff from more customized designs can be big. Gwennap said that the specialized designs can improve performance per watt, a measure of power efficiency, by 10 times to 100 times.
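To make the metric concrete, here is a minimal sketch of how performance per watt is computed and compared. The specific throughput and power figures below are hypothetical, chosen only to illustrate the upper end of the 10x-to-100x range Gwennap cited, not measurements of any real chip.

```python
# Hypothetical numbers for illustration: a general-purpose CPU versus a
# specialized accelerator running the same workload.
def perf_per_watt(ops_per_second: float, watts: float) -> float:
    """Performance per watt: operations per second per watt consumed."""
    return ops_per_second / watts

cpu = perf_per_watt(ops_per_second=1e11, watts=100)   # 1e9 ops/s per watt
accel = perf_per_watt(ops_per_second=2e12, watts=20)  # 1e11 ops/s per watt

# Efficiency gain of the specialized design over the general-purpose one.
print(accel / cpu)  # 100.0 — the top of the 10x-100x range
```

The point of the metric is that a specialized chip can win even if its raw throughput is similar, simply by doing the work at far lower power.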

Vision processors have become important in the past few years with the rise of image recognition tasks in computing. Self-driving cars need them to see bicyclists and pedestrians. Computers need them to recognize faces. Game consoles need them to parse voices. Companies that provide these vision processors include Cadence, Ceva, Synopsys, and VeriSilicon. Freescale acquired CogniVue and Intel bought Movidius to stay current in this space.

Above: Linley Gwennap, founder of the Linley Group.

Image Credit: Dean Takahashi

Neural network processors have also become fashionable, with the successes in training brain-like networks for artificial intelligence. Google developed its own custom chip for TensorFlow; Wave Computing is talking about a new chip this week; Nvidia introduced its first AI processors this year; IBM is using brain-like chips for AI processing; and Intel acquired Nervana Systems for AI processing.

Meanwhile, Intel dominates the server chip market, which generated $10 billion in 2015. Gwennap said Intel holds 99 percent of that market, which is prompting renewed competition, since server vendors don’t want to be dependent on a single supplier. IBM, ARM, and Advanced Micro Devices continue to try to outfox Intel with more power-efficient server chip designs, and Qualcomm and Broadcom are expected to launch ARM-based server chips in 2017, Gwennap said. In servers, network interface cards are offloading work from the CPUs. But memory chip speeds are relatively slow and remain a big bottleneck in servers.

Gwennap said he believes that the Internet of Things, or making everyday objects smart and connected, will provide a big opportunity for chip makers who can provide everything from sensors to the processors required to interpret the data they collect. As cities implement everything from smart street lights to smart parking meters, the demand for processors will rise into the billions of units.

Fergus Casey, senior R&D manager for ARC Processors at Synopsys, said in a talk at the conference that the Internet of Things opportunity will be big, but security will become more important for the processors and sensors that deliver smart infrastructure.

“There is no silver bullet to security at the edge,” Casey said. “You really have to look at multiple layered lines of defense.”

Self-driving cars will also require a combination of vision processors and complex data analysis. No longer will simple microcontrollers from years past be able to handle the huge workloads in cars.

“You may need a neural network and a combination of multiple processors,” Gwennap said.

He predicted that the first partially autonomous vehicles will appear in 2018, a step beyond the driver-assist options available today from Volvo and Tesla. And Ford promises a fully autonomous vehicle by 2021. Based on that timeline, Gwennap predicts the hardware electronics in a car will rise to a $5,000 bill of materials cost by 2022. That cost is acceptable if a taxi service can save $30,000 a year per car on driver pay.
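The economics behind that claim are simple enough to work out directly. Using only the two figures from the article ($5,000 in hardware, $30,000 a year in driver savings), a rough payback sketch looks like this; real fleet economics would of course include maintenance, insurance, and utilization, which the article does not cover.

```python
# Rough payback sketch using the figures cited in the article.
hardware_cost = 5_000           # projected autonomy bill of materials by 2022, USD
annual_driver_savings = 30_000  # per car per year on driver pay, USD

payback_years = hardware_cost / annual_driver_savings
print(f"{payback_years:.2f} years")  # 0.17 years, roughly two months
```

At that rate the autonomy hardware pays for itself in about two months, which is why the rising bill of materials is "OK" for a taxi operator.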

That means that chip designers will have a lot of work to do, and they’ll be busy for years to come, whether Moore’s Law helps them out or not.