ARM is unveiling its ambitious new machine learning processor platform, dubbed Project Trillium. The platform includes processors and software for improving artificial intelligence operations in mobile devices at the edge of networks, rather than in data centers.
ARM has created a high-end processor to handle machine learning calculations, the kind of computation that enables computers to learn to perform tasks without being explicitly programmed.
“Project Trillium is a whole new class of product with hardware and software,” said Jem Davies, vice president, fellow, and general manager of ARM’s Machine Learning Group. “We looked at GPUs (graphics processing units) and CPUs (central processing units), but it became clear that executing with the best efficiency required a ground-up design specific to machine learning.”
ARM believes that putting machine learning into mobile devices is the best computing solution for the future. If much of the AI stayed in the cloud, meaning web-connected data centers, we would have to send too much data over the Internet to feed those AI processors, Davies said.
“You have to do more of the processing locally, in your mobile device,” Davies said. “If you sent video to the cloud, there isn’t enough bandwidth in the world to handle it. You can’t afford the power to keep those data centers going. It has cost, latency, reliability, and security problems. That is why machine learning is moving to the edge. We believe that machine learning processors will significantly outperform GPUs and CPUs.”
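Davies' bandwidth point is easy to sanity-check with back-of-envelope arithmetic. The sketch below estimates the raw data rate of a single uncompressed full-HD camera feed; the resolution, color depth, and frame rate are illustrative assumptions, not figures from ARM:

```python
# Back-of-envelope estimate of raw video bandwidth, illustrating why
# streaming uncompressed camera feeds to the cloud is impractical.
# Assumed figures (not from ARM): 1080p, 24-bit RGB color, 60 fps.
width, height = 1920, 1080
bytes_per_pixel = 3          # 24-bit RGB
fps = 60

bytes_per_second = width * height * bytes_per_pixel * fps
gigabits_per_second = bytes_per_second * 8 / 1e9
print(f"{gigabits_per_second:.2f} Gbit/s per camera")  # ~2.99 Gbit/s
```

At roughly 3 Gbit/s per camera before compression, even aggressive video codecs leave an uplink demand that quickly multiplies across billions of devices, which is the core of the argument for processing on the device itself.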
ARM will provide the first designs to its partners in mid-2018, and the first chips could debut late this year or next year.
ARM has also created an object-detection processor, which identifies people and patterns in images and videos. And it has created neural network software libraries.
ARM is targeting mobile markets, which account for 1.7 billion to 2.2 billion units, according to market researcher Strategy Analytics. Smart internet-connected cameras are expected to grow from 160 million units today to 1.3 billion units within 10 years, and AI-enabled devices are expected to grow from 300 million to 3.2 billion by 2028, according to market researcher Gartner.
ARM kept those trends in mind as it designed a processor that scales from low-end to high-end machine learning applications, depending on the number of cores used.
“To hit the levels of power or thermals in a constrained environment, machine learning has to be done with a high level of power efficiency,” Davies said.
ARM expects to have a family of machine learning processors, with the first one targeting mobile devices. It will deliver an estimated 4.6 trillion operations per second, and software optimizations could provide an uplift of two to four times that performance in real-world applications.
ARM is targeting the chips for 7-nanometer manufacturing. Consumer products could come out by mid-2019.
ARM’s second-generation object-detection processor detects objects in real time at full high-definition resolution and 60 frames per second. It can identify objects as small as 50 by 60 pixels, it operates about 80 times faster than a rival digital signal processor, and it can even determine which direction people are facing.
“We can track people in real time at a fast frame rate,” Davies said.
The machine learning processor will sit alongside an ARM CPU in a system. And it could do amazing things, like identifying the fish around you when you take a camera underwater while diving or snorkeling.
The smartphone processors need to operate at around 1.5 to 2 watts. Internet of Things applications at the low end could include cameras that recognize when a trash can is full and needs pickup service, Davies said.
“So much can be done when you have smart processing,” he said. “Maybe you can detect if a small child walking around on their own is lost.”
Davies foresees the day when phone makers, eager to differentiate themselves, will talk about better machine learning applications rather than other hardware specs.