Nvidia already has a worldwide reputation and a No. 1 market share designation for making top-flight graphics processing units (GPUs) to render images, video, and 2D or 3D animations for display. Lately, it has used its success to venture into IT territory, but without making hardware.
One year after the company launched Nvidia Fleet Command, a cloud-based service for deploying, managing, and scaling AI applications at the edge, it has added new features that help close the distance between administrators and far-flung edge servers by improving the management of edge AI deployments around the world.
Edge computing is a distributed computing approach, with its own set of resources, that allows data to be processed close to where it originates instead of being transferred to a centralized cloud or data center. Edge computing speeds up analysis by reducing the latency involved in moving data back and forth. Fleet Command is designed to enable the control of such deployments through its cloud interface.
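The latency argument can be made concrete with rough arithmetic: sending a camera frame to a distant cloud costs transfer time plus a network round trip, while processing it at the edge costs only local inference time. A minimal back-of-envelope sketch (all numbers here are illustrative assumptions, not Nvidia figures):

```python
# Illustrative latency comparison for analyzing one 4 MB camera frame.
# Every number below is an assumption for illustration, not a measured
# or vendor-supplied figure.

FRAME_BYTES = 4 * 1024 * 1024        # one camera frame
UPLINK_BPS = 20 * 1024 * 1024 / 8    # assumed 20 Mbit/s uplink, in bytes/sec
RTT_S = 0.08                         # assumed 80 ms round trip to the cloud
CLOUD_INFER_S = 0.010                # assumed inference time on a cloud GPU
EDGE_INFER_S = 0.030                 # assumed (slower) edge GPU inference time

# Cloud path: upload the frame, wait a round trip, run inference remotely.
cloud_total = FRAME_BYTES / UPLINK_BPS + RTT_S + CLOUD_INFER_S
# Edge path: no transfer at all, just local inference.
edge_total = EDGE_INFER_S

print(f"cloud path: {cloud_total * 1000:.0f} ms per frame")
print(f"edge path:  {edge_total * 1000:.0f} ms per frame")
```

Under these assumed numbers the transfer time alone dwarfs inference time, which is why weakly connected sites process locally and use the cloud only for control.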
“In the world of AI, distance is not the friend of many IT managers,” Nvidia product marketing manager Troy Estes wrote in a blog post. “Unlike data centers, where resources and personnel are consolidated, enterprises deploying AI applications at the edge need to consider how to manage the extreme nature of edge environments.”
Cutting out the latency in remote deployments
Often, the network links connecting data centers or clouds to a remote AI deployment are difficult to make fast enough for production use. Given the large amounts of data that AI applications require, it takes a high-performance network and careful data management to make these deployments work well enough to satisfy service-level agreements.
“You can run AI in the cloud,” Nvidia senior manager of AI video Amanda Saunders told VentureBeat. “But typically the latency that it takes to send stuff back and forth – well, a lot of these locations don’t have strong network connections; they may seem to be connected, but they’re not always connected. Fleet Command allows you to deploy those applications to the edge but still maintain that control over them so that you’re able to remotely access not just the system but the actual application itself, so you can see everything that’s going on.”
At the scale of some edge AI deployments, organizations can have thousands of independent locations that must be managed by IT. Sometimes these must run in extremely remote locations, such as oil rigs, weather gauges, distributed retail stores, or industrial facilities. These connections are not for the networking faint of heart.
Nvidia Fleet Command offers a managed platform for container orchestration using a Kubernetes distribution that makes it relatively easy to provision and deploy AI applications and systems in thousands of distributed environments, all from a single cloud-based console, Saunders said.
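The fleet pattern Saunders describes is, at its core, declarative reconciliation: one desired application state defined in the console, compared against what each site reports, with only the drifted sites updated. A toy sketch of that control loop (the site names, app, and version fields are invented; a real system would drive a Kubernetes API per edge cluster rather than a dictionary):

```python
# Toy reconciliation loop for fleet management: compare each edge site's
# reported app version against a single desired version and list the
# sites that need an update. All names and versions are invented.

desired = {"app": "vision-analytics", "version": "2.1"}

# What each (hypothetical) edge site currently reports as deployed.
sites = {
    "store-014": {"vision-analytics": "2.1"},
    "rig-07":    {"vision-analytics": "1.9"},
    "factory-3": {"vision-analytics": "2.0"},
}

def sites_needing_update(sites, desired):
    """Return the site names whose deployed version has drifted
    from the desired one."""
    return sorted(
        name
        for name, deployed in sites.items()
        if deployed.get(desired["app"]) != desired["version"]
    )

print(sites_needing_update(sites, desired))
```

Running the loop from one place against every site is what lets a single console manage thousands of locations without an administrator logging into each one.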
Optimizing connections is also part of the task
Deployment is only one step in managing AI applications at the edge. Optimizing these applications is a continuous process that involves applying patches, deploying new applications, and rebooting edge systems, Estes said. The new Fleet Command features are designed to make these workflows work in a managed environment with:
- Advanced remote management: Remote management on Fleet Command now has access controls and timed sessions, eliminating vulnerabilities that come with traditional VPN connections. Administrators can securely monitor activity and troubleshoot issues at remote edge locations from the comfort of their offices. Edge environments are extremely dynamic, which means administrators responsible for edge AI deployments need to be just as dynamic to keep up with rapid changes and minimize deployment downtime. This makes remote management a critical feature for every edge AI deployment.
- Multi-instance GPU (MIG) provisioning: MIG is now available on Fleet Command, enabling administrators to partition GPUs and assign applications from the Fleet Command user interface. By allowing organizations to run multiple AI applications on the same GPU, MIG enables organizations to right-size their deployments and get the most out of their edge infrastructure.
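The right-sizing benefit of MIG boils down to packing several small workloads onto one physical GPU instead of dedicating a whole card to each. A hypothetical sketch of that assignment logic (the profile names and memory sizes follow the A100's published MIG geometry; the apps and their memory needs are invented examples, and real provisioning happens through the Fleet Command UI or Nvidia tooling, not code like this):

```python
# Hypothetical sketch: choose MIG slices for edge apps sharing one
# 40 GB A100-class GPU. Profile names/sizes follow the A100 MIG
# geometry; the workloads and their memory needs are invented.

MIG_PROFILES = {"1g.5gb": 5, "2g.10gb": 10, "3g.20gb": 20}  # GB per slice

apps = {"object-detect": 9, "ocr": 4, "anomaly": 18}  # assumed GB needed

def smallest_fitting_profile(need_gb):
    """Pick the smallest MIG profile whose memory covers the app's need."""
    for name, size in sorted(MIG_PROFILES.items(), key=lambda kv: kv[1]):
        if size >= need_gb:
            return name
    return None  # the app needs a whole GPU (or more)

assignment = {app: smallest_fitting_profile(need) for app, need in apps.items()}
print(assignment)
```

Three applications end up sharing one GPU instead of occupying three separate cards, which is the "right-sizing" that the feature description refers to.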
Several companies have been using Fleet Command’s new features in a beta program for these use cases:
- Domino Data Lab, which provides an enterprise MLOps platform that allows data scientists to experiment, research, test, and validate AI models before deploying them into production;
- video management provider Milestone Systems, which created AI Bridge, an application programming interface gateway that makes it easy to give AI applications access to consolidated video feeds from dozens of camera streams; and
- IronYun's AI platform Vaidio, which applies AI analytics to help retailers, banks, NFL stadiums, factories, and others fuel their existing cameras with the power of AI.
The edge AI software management market is projected by Astute Analytica to reach $8.05 billion by 2027. Nvidia is competing in the market along with Juniper Networks, VMware, Cloudera, IBM and Dell Technologies, among others.