Today, machine learning model development start-up OmniML announced it has launched with $10 million in seed funding to provide enterprises with an artificial intelligence (AI) deployment platform for edge devices. The funding round, led by GGV Capital, will enable OmniML to expand its machine learning (ML) team and enhance its software development.
OmniML’s platform lets users design, optimize and deploy advanced machine learning models to hardware devices at the network edge, giving enterprises small, scalable and efficient models that allow edge devices to perform AI inference tasks on their own.
The organization claims this approach makes major machine learning tasks 10 times faster on various edge devices, which gives enterprises and technical decision makers a potential solution for deploying AI applications like computer vision at the network’s edge.
Pushing AI to the edge with ML models
As researchers expect organizations to invest over $434 billion in AI in 2022 to gain access to greater insights, there is a growing need for ML models that can run at the network’s edge without overloading the hardware. This is a challenge, as most existing AI models aren’t lightweight enough to run on edge devices.
“Today’s AI is too big, as modern deep learning requires a massive amount of computational resources, carbon footprint, and engineering efforts. This makes AI on edge devices extremely difficult because of the limited hardware resources, the power budget, and deployment challenges,” said Di Wu, cofounder and CEO of OmniML.
“The fundamental cause of the problem is the mismatch between AI models and hardware, and OmniML is solving it from the root by adapting the algorithms for edge hardware,” Wu said. “This is done by improving the efficiency of a neural network using a combination of model compression, neural architecture rebalances, and new design primitives.”
This approach, which grew out of the research of Song Han, an assistant professor of electrical engineering and computer science at MIT, uses a “deep compression” technique that reduces the size of the neural network without losing accuracy, so the solution can better optimize ML models for different chips and devices at the network’s edge.
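To make the idea concrete, one common ingredient of deep compression is magnitude-based weight pruning: the smallest weights in a network contribute little to its output and can be zeroed out, shrinking the model with minimal accuracy loss. The sketch below is purely illustrative — the function name, threshold logic, and flat weight list are assumptions for the example, not OmniML’s actual implementation.

```python
# Illustrative magnitude-based pruning: zero out the fraction of weights
# with the smallest absolute values. Real systems prune whole tensors or
# layers and then fine-tune; this toy version works on a flat list.

def prune_weights(weights, sparsity=0.5):
    """Return a copy of `weights` with the smallest-magnitude
    fraction (`sparsity`) of entries set to zero."""
    n_prune = int(len(weights) * sparsity)
    # Rank indices by absolute weight value, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = prune_weights(w, sparsity=0.5)
# Half the entries are zeroed; the large-magnitude weights survive.
```

The zeroed entries can then be stored in a sparse format, which is where the memory and compute savings on constrained edge hardware come from.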
The drive for scalable edge AI
Researchers estimate that the global edge AI software market was valued at $590 million in 2020 and will reach $1.8 billion by 2026, as 5G networks develop and the number of devices connected to modern networks increases.
As enterprises demand increasingly decentralized technologies, many vendors are starting to focus on developing solutions to make AI inference tasks viable at the network’s edge.
One competitor is Edge Impulse, a low-code development platform designed specifically to help users design, test and deploy machine learning models to edge devices, which recently obtained $34 million as part of a series B funding round.
While such providers have found success, Wu argues that OmniML stands out because it builds efficient algorithms from the ground up, rather than simply compressing existing models.
“All existing solutions focus on downstream optimizations, like quantization, pruning, compiler optimizations, etc. Yet none of them is trying to solve the fundamental problems: existing AI models are not designed for constrained edge hardware. By focusing on the fundamental algorithms, our solution provides maximum scalability. It truly works for any model, any hardware, and any task.”
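For context on the “downstream optimizations” Wu contrasts with OmniML’s approach, post-training quantization is a typical example: trained floating-point weights are mapped to low-bit integers with a scale factor, cutting storage and bandwidth at a small cost in precision. The sketch below is a minimal, hypothetical illustration of per-tensor linear quantization, not any vendor’s actual pipeline.

```python
# Illustrative post-training linear quantization: map floats to signed
# 8-bit integers using a single per-tensor scale factor, then map back.

def quantize(values, bits=8):
    """Return (quantized integers, scale) for a list of floats."""
    qmax = 2 ** (bits - 1) - 1          # 127 for signed int8
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    """Recover approximate float values from quantized integers."""
    return [x * scale for x in q]

vals = [0.5, -1.0, 0.25, 0.75]
q, s = quantize(vals)
approx = dequantize(q, s)
# `approx` is close to `vals`, at a quarter of the 32-bit float storage.
```

The point of Wu’s critique is that optimizations like this are applied after the fact to a model that was never designed for edge constraints, whereas OmniML claims to shape the architecture itself for the target hardware.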