The world is about to get a whole lot smarter.
As the new decade begins, we’re hearing predictions on everything from fully remote workforces to quantum computing. However, one emerging trend is scarcely mentioned on tech blogs – one that may be small in form but has the potential to be massive in implication. We’re talking about microcontrollers.
There are 250 billion microcontrollers in the world today. In 2018 alone, 28.1 billion units were sold, and IC Insights forecasts annual shipment volume to grow to 38.2 billion by 2023.
Perhaps we are getting a bit ahead of ourselves, though, because you may not know exactly what we mean by microcontrollers. A microcontroller is a small, special-purpose computer dedicated to performing one task or program within a device. For example, a microcontroller in a television controls the channel selector and speaker system, changing them when it receives input from the TV remote. Microcontrollers and the components they manage are collectively called embedded systems, since they are embedded in the devices they control. Take a look around — these embedded systems are everywhere, in nearly any modern electronic device. Your office machines, cars, medical devices, and home appliances almost certainly all have microcontrollers in them.
With all the buzz about cloud computing, mobile device penetration, artificial intelligence, and the Internet of Things (IoT) over the past few years, these microcontrollers (and the embedded systems they power) have largely been underappreciated. This is about to change.
The strong growth in microcontroller sales in recent years has been largely driven by the broad tailwinds of the IoT. Microcontrollers facilitate automation and embedded control in electronic systems, as well as the connection of sensors and applications to the IoT. These handy little devices are also exceedingly cheap, with an average price of 60 cents per unit (and dropping). Although low in cost, the economic impact of what microcontrollers enable at the system level is massive, since the sensor data from the physical world is the lifeblood of digital transformation in industry. However, this is only part of the story.
A coalescence of several trends has made the microcontroller not just a conduit for implementing IoT applications but also a powerful, independent processing mechanism in its own right. In recent years, hardware advancements have made it possible for microcontrollers to perform calculations much faster. Improved hardware coupled with more efficient development standards have made it easier for developers to build programs on these devices. Perhaps the most important trend, though, has been the rise of tiny machine learning, or TinyML. It’s a technology we’ve been following since investing in a startup in this space.
TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers. A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their performance is limited by the time it takes to complete a large number of arithmetic operations rather than by memory or connectivity. Advancements in TinyML, chiefly techniques such as quantization that shrink models and cheapen that arithmetic, have made it possible to run these models on existing microcontroller hardware.
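To make this concrete, the workhorse technique for fitting deep learning models onto microcontrollers is post-training quantization: mapping 32-bit float weights onto 8-bit integers so they fit in kilobytes of flash and run on integer-only hardware. Below is a minimal sketch of the affine int8 scheme in plain Python; the function names and the specific clamping choices are illustrative, not any particular library's API:

```python
def quantize_int8(weights):
    """Map a list of float weights onto int8 values.

    Returns (quantized weights, scale, zero_point). Real TinyML
    toolchains do this per-tensor or per-channel; this sketch
    handles one flat list of floats.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant weights
    zero_point = round(-lo / scale) - 128  # maps lo near -128, hi near 127
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Approximately reconstruct the original floats."""
    return [(qi - zero_point) * scale for qi in q]
```

The point of the exercise: each weight shrinks from 4 bytes to 1, and inference can use cheap integer multiply-accumulates, at the cost of a small, bounded reconstruction error.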
In other words, those 250 billion microcontrollers in our printers, TVs, cars, and pacemakers can now perform tasks that previously only our computers and smartphones could handle. All of our devices and appliances are getting smarter thanks to microcontrollers.
TinyML represents a collaborative effort between the embedded ultra-low power systems and machine learning communities, which traditionally have operated largely independently. This union has opened the floodgates for new and exciting applications of on-device machine learning. However, the knowledge that deep learning and microcontrollers are a perfect match has been pretty exclusive, hidden behind the walls of tech giants like Google and Apple. This becomes more obvious when you learn that this paradigm of running modified deep learning models on microcontrollers is responsible for the “Okay Google” and “Hey Siri” functionality that has been around for years.
But why is it important that we be able to run these models on microcontrollers? Much of the sensor data generated today is discarded because of cost, bandwidth, or power constraints – or sometimes a combination of all three. For example, take an imagery micro-satellite. Such satellites are equipped with cameras capable of capturing high resolution images but are limited by the size and number of photos they can store and how often they can transmit those photos to Earth. As a result, such satellites have to store images at low resolution and at a low frame rate. What if we could use image detection models to save high resolution photos only if an object of interest (like a ship or weather pattern) was present in the image? While the computing resources on these micro-satellites have historically been too small to support image detection deep learning models, TinyML now makes this possible.
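The satellite scenario above boils down to a simple on-device filter: run a small detector against each captured frame and keep only the interesting ones. Here is a minimal sketch, where the detector is a hypothetical stand-in for a quantized TinyML model's inference call:

```python
def filter_frames(frames, detect, threshold=0.5):
    """Keep only the frames where the on-device detector fires.

    `detect` is any callable returning a confidence in [0, 1].
    On a real microcontroller this would invoke a quantized
    image detection model; here it is a placeholder so the
    filtering logic itself is visible.
    """
    return [frame for frame in frames if detect(frame) >= threshold]
```

The design choice worth noting is that only frames passing the filter consume scarce storage and downlink bandwidth; everything else is discarded immediately, which is exactly the constraint the article describes.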
Another benefit of deploying deep learning models on microcontrollers is that microcontrollers use very little energy. Unlike systems that require a direct connection to the power grid or frequent battery charging or replacement, a microcontroller can run an image recognition model continuously for a year on a single coin battery. Furthermore, because inference happens on the device itself rather than in the cloud, these smart embedded systems can be deployed essentially anywhere. By enabling decision-making without continuous connectivity to the internet, the ability to deploy deep learning models on embedded systems creates an opportunity for completely new types of products.
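The coin-battery claim can be sanity-checked with back-of-the-envelope arithmetic. Assuming a typical CR2032 cell (roughly 225 mAh at a nominal 3 V, datasheet figures rather than numbers from this article), a year of continuous operation implies an average power budget of well under a milliwatt:

```python
# Back-of-the-envelope power budget for one year on a coin cell.
# CR2032 figures below are typical datasheet values, used as assumptions.
capacity_mah = 225.0   # nominal CR2032 capacity
voltage = 3.0          # nominal cell voltage
energy_joules = capacity_mah / 1000.0 * voltage * 3600.0  # ~2430 J total
seconds_per_year = 365 * 24 * 3600
budget_watts = energy_joules / seconds_per_year
print(f"Average power budget: {budget_watts * 1e6:.0f} microwatts")
# prints: Average power budget: 77 microwatts
```

In other words, the whole system, sensor, microcontroller, and model inference combined, has to average on the order of tens of microwatts, which is the regime TinyML targets.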
Early TinyML applications
It’s easy to talk about applications in the abstract, but let’s narrow our focus to specific applications likely to arrive in the coming years that could change the way we work and live:
Mobility: Applying TinyML to sensors that ingest real-time traffic data can help route traffic more effectively and reduce response times for emergency vehicles. Companies like Swim.AI use TinyML on streaming data to improve passenger safety and reduce congestion and emissions through efficient routing.
Smart factory: In the manufacturing sector, TinyML can prevent downtime due to equipment failure by enabling real-time decision-making. It can alert workers to perform preventative maintenance when necessary, based on equipment conditions.
Retail: By monitoring shelves in-store and sending immediate alerts as item quantities dwindle, TinyML can keep items from going out of stock.
Agriculture: Farmers risk severe profit losses from animal illnesses. Data from livestock wearables that monitor health vitals like heart rate, blood pressure, and temperature can help predict the onset of disease and outbreaks.
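The smart factory and agriculture examples above share a pattern: monitor a sensor stream on-device and raise an alert when readings drift outside a healthy band. A minimal sketch of that pattern (the window size and deviation limit are illustrative, and a deployed system would use a trained model rather than a fixed threshold):

```python
from collections import deque

def rolling_alerts(readings, window=5, limit=2.0):
    """Yield the indices of readings that deviate from the recent
    rolling mean by more than `limit` -- a stand-in for on-device
    condition monitoring of, e.g., vibration or heart-rate data.
    """
    recent = deque(maxlen=window)  # only the last `window` samples are kept
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > limit:
                yield i
        recent.append(value)
```

Because only the alert, not the raw stream, needs to leave the device, this kind of logic fits the cost, bandwidth, and power constraints described earlier.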
Before TinyML goes mainstream …
As intriguing as TinyML may be, we are very much in its early stages, and a number of things need to happen before it sees mainstream adoption.
Every successful ecosystem is built on engaged communities. A vibrant TinyML community will lead to faster innovation as it increases awareness and adoption. We need more investments in open-source projects supporting TinyML (like the work Google is doing around TensorFlow for broader machine learning), since open source allows each contributor to build on top of the work of others to create thorough and robust solutions.
Other core ecosystem participants and tools will also be necessary:
- Chipset manufacturers and platforms like Qualcomm, ST, and ETA Compute can work hand-in-hand with developers to ensure chipsets are ready for the intended applications, and that platform integrations are built to facilitate rapid application development.
- Cloud players can invest in end-to-end optimized platform solutions that allow seamless exchange and processing of data between devices and the cloud.
- Direct support is needed from device-level software infrastructure companies such as Memfault, which is trying to improve firmware reliability, and Argosy Labs, which is tackling data security and sharing on the device level. These kinds of changes give developers more control over software deployments with greater security from nearly any device.
- Lifecycle TinyML tools need to be built that facilitate dataset management, algorithm development, and version management and that enhance the testing and deployment lifecycle.
Ultimately, though, it is innovators who drive change. We need more machine learning experts with the resources to challenge the status quo and make TinyML even more accessible. Pete Warden, head of the TensorFlow mobile team, has taken on the ambitious task of building machine learning applications that run on a microcontroller for a year using only a hearing aid battery for power. We need more leaders like Pete to step up and lead breakthroughs that make TinyML a near-term reality.
In summary: TinyML is a giant opportunity that’s just beginning to emerge. Expect to see quite a bit of movement in this space over the next year or two.
TX Zhuo is General Partner at Fika Ventures. Huston Collins is Senior Associate at Fika Ventures.