It’s not often the world of semiconductors is turned on its head. But just such a transformation is now occurring, as a superabundance of startups takes on the challenge of low-power neural networks.
These startups are trying to move neural network-based machine learning from the cloud data center to embedded systems in the field – to what’s now called “the edge.” Making chips work in this new world will require new ways of structuring neural networks, designing memory paths, and compiling software to hardware.
Establishing this new formula will challenge the brightest minds in electrical engineering. But the push for edge AI has begun, and it has spawned myriad startups, including Axelera.AI, Deep Vision, EdgeQ, Hailo, Sima.ai, and many more.
Opportunities abound for edge AI startups
Driving this, according to analyst firm ABI Research, is the need for local data processing, low latency, and fewer round trips to AI chips back in the cloud. The firm also cites better data privacy as an impetus. It’s all seen as an opening for upstarts in an edge AI chipset market that ABI estimates will grow to $28 billion by 2026, a compound annual growth rate (CAGR) of 28.4% from 2021 to 2026.
That growth will require designs that move beyond bellwether AI apps, like those that recognize images of cats and dogs, created in power-rich cloud data centers. That quest to expand use cases should give optimists pause.
“Making the chips is one thing, but getting them to work across many different neural network types is another. We are not there yet,” said Marian Verhelst, a circuits and systems researcher at Katholieke Universiteit Leuven and the Imec tech hub in Belgium, as well as a member of the TinyML Foundation, who spoke with VentureBeat.
“Still, it’s a really cool time to be active in this new domain,” added Verhelst, who is also an advisor to Netherlands-based Axelera.AI. The company recently gained $12 million in seed funding from security infrastructure provider Bitfury to pursue edge AI chips.
What matters when it comes to designing this new chip generation? Chip designers and their customers alike now need to explore the question. In an interview, Verhelst outlined the pressing points as she saw them:
- The shape of the neural network matters. Re-using data points saves energy in neural processing, but different neural schemes lead to different design tradeoffs. You must decide how flexible and software-programmable you want your system to be – and that affects power, area, and performance. Said Verhelst: “How much you can use a specific data element depends very strongly on the specific topology of your neural network layer. It turns out there is not a single architecture that can [handle] all types of neural networks efficiently. It’s a question of whether you can make your data flow control flexible enough such that it can map to a wide variety of neural layers.”
- Memory path hierarchy matters. Keeping the processor fed with data is the objective in designing a memory path for neural processing. Said Verhelst: “With Moore’s law, we can put a lot of multipliers on a chip. That’s the easy part. The challenge is to provide them all with the necessary data every clock cycle, and to do that you need a memory hierarchy with sufficient bandwidth, where data is reused at different levels depending on how often you need the data again. That can really impact performance.”
- Algorithm mapping matters. Compiling code to run efficiently on underlying hardware is something of an eternal quest. However, while this is an art nearly mastered for conventional ICs, it is still a work in progress for edge AI chips. Said Verhelst: “Compiler chains are really not yet mature. There is no standardized compilation flow, although people are trying to develop it with initiatives like TVM and Glow. The problem is that every accelerator looks different. People have to make their own low-level kernel functions for specific accelerators. And this is really a painful manual job.”
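Verhelst’s first point, that network topology dictates how often a fetched data element can be reused, can be illustrated with some back-of-the-envelope arithmetic. The sketch below is a simplification for illustration only (the function names and layer sizes are hypothetical, not drawn from any vendor’s design): it counts how many multiply-accumulate operations a single weight fetched from memory can feed in a convolutional layer versus a fully connected layer.

```python
def conv2d_weight_reuse(out_h: int, out_w: int) -> int:
    """In a 2D convolution, each kernel weight is applied at every
    output position, so one weight fetched from memory feeds
    out_h * out_w multiply-accumulates before it must be replaced."""
    return out_h * out_w

def dense_weight_reuse() -> int:
    """In a fully connected layer at batch size 1, each weight is used
    for exactly one multiply-accumulate, so there is no reuse."""
    return 1

# Hypothetical example: a 3x3 convolution producing a 112x112 feature
# map reuses each weight 12,544 times; a dense layer reuses it once.
conv_reuse = conv2d_weight_reuse(112, 112)   # 12544
dense_reuse = dense_weight_reuse()           # 1
```

An architecture tuned for convolution-style reuse can keep its multipliers busy from small local buffers, while a dense or attention-heavy workload forces far more traffic down the memory hierarchy – which is why, as Verhelst says, no single fixed dataflow serves all network types efficiently.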
These matters drive design decisions at Axelera AI. The company is preparing to go to market with an accelerator chip centered around analog in-memory processing, transformer neural nets, and data flow architecture while consuming less than 10 watts.
“We put together the in-memory computing, which is a new paradigm in technology, and we merge this with a data flow architecture, which gives a lot of flexibility in a small footprint, with small power consumption,” said Axelera cofounder and CEO Fabrizio Del Maffeo, who emphasized that this is an accelerator that can work with an “agnostic” assortment of CPUs.
Del Maffeo cites vision systems, smart cities, manufacturing, drones, and retail as targets for its edge AI efforts.
The competition to forge a solution in edge AI is tough, but entrepreneurs like Del Maffeo and engineers like Verhelst will enthusiastically accept the challenge.
“It’s a very interesting time for hardware, chips, designers, and startups,” Verhelst said. “For the first time in a couple of decades, hardware really starts to be at the center of attention again.”
No doubt, it’s interesting to be there when a new IC architecture is born.