Intel and Aaeon made it easier for hardware companies to build a machine learning accelerator into their products with today’s launch of a new circuit board called the AI Core. The board contains a Movidius Myriad 2 Vision Processing Unit, which speeds up the execution of AI algorithms while drawing only around a watt of power.

That’s the same sort of capability hardware makers can get from the Movidius Neural Compute Stick, which looks like a somewhat bulky USB flash drive but offers AI acceleration. Since Intel released the stick last year, it has picked up a following among hardware startups, makers, and developers interested in experimenting with AI.

The stick is optimized for speeding up the execution of different types of machine learning algorithms, including convolutional neural networks, which are the backbone of many image recognition systems. Companies are increasingly building and deploying dedicated AI hardware because it can ease the heavy computation and power demands of intelligent software.

A bulky USB stick that serves as an integral part of a robot, and that could be knocked loose or pulled out, isn’t exactly practical, which is why the AI Core now exists. When companies want to bring their AI hardware into production, they can move from the Neural Compute Stick to the AI Core without changing their code.

Customers who want to take advantage of the other features in the Myriad 2, like video encoding accelerators, will still have to source the chips from Intel directly, rather than using the AI Core.

All of this is part of Intel’s overall strategy of building more AI-specific hardware following its acquisition of several key companies in the space, including Movidius, Altera, Mobileye, and Nervana. That arena is a battleground for established chipmakers and new startups alike, since machine learning algorithms demand a great deal of compute power, and specialized silicon can deliver it far more efficiently than general-purpose chips.