
Follow along with VentureBeat’s ongoing coverage from Nvidia’s GTC 2022 event.

San Francisco-based Domino Data Lab, a company known for its machine learning operations (MLops) platform, today announced new integrations with Nvidia as part of an effort to empower enterprises with faster GPU-accelerated model deployments. 

At the ongoing GTC Spring event, the company said its platform will now support Nvidia’s cloud service Fleet Command, which will enable customers to securely deploy, manage and scale AI models across distributed edge devices. 

The integration, according to the company, will reduce infrastructure friction and extend key enterprise MLops benefits – collaboration, reproducibility and model lifecycle management – to Nvidia-certified systems in environments such as retail stores, warehouses, hospitals and city street intersections.

Previously, data scientists deploying and monitoring GPU-accelerated models at the edge had to shoulder the burden of IT and DevOps work; this integration removes much of that friction. In essence, data scientists iterate on models in Domino’s MLops platform and then use Fleet Command to orchestrate the edge AI lifecycle, from streamlined deployments and over-the-air updates through monitoring.

Support for MPI clusters

Alongside the Fleet Command integration, Domino is also adding support for on-demand Message Passing Interface (MPI) clusters, allowing data scientists to use Nvidia DGX nodes in the same Kubernetes cluster as Domino, as well as Nvidia’s NGC catalog and AI Enterprise platform.

The MPI clusters, the company explained, will save data scientists time otherwise spent on administrative DevOps tasks, while the NGC catalog and AI Enterprise platform will accelerate end-to-end workflows with a hub of AI frameworks (such as PyTorch and TensorFlow), industry-specific SDKs and pretrained models.
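For context on what running on-demand MPI workloads inside a Kubernetes cluster can look like, the Kubeflow MPI Operator’s MPIJob resource is a common open-source approach. The sketch below is illustrative only and is not a description of Domino’s internal implementation; the job name, container image tag and training command are hypothetical placeholders.

```yaml
# Illustrative MPIJob (Kubeflow MPI Operator, v2beta1 API): a launcher
# coordinates an mpirun across two GPU-backed worker pods.
apiVersion: kubeflow.org/v2beta1
kind: MPIJob
metadata:
  name: dgx-training-sketch        # hypothetical job name
spec:
  slotsPerWorker: 1                # one MPI slot per worker pod
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - name: launcher
            # Example NGC container image; tag is a placeholder
            image: nvcr.io/nvidia/pytorch:22.03-py3
            command: ["mpirun", "-np", "2", "python", "train.py"]
    Worker:
      replicas: 2                  # two workers, one GPU each
      template:
        spec:
          containers:
          - name: worker
            image: nvcr.io/nvidia/pytorch:22.03-py3
            resources:
              limits:
                nvidia.com/gpu: 1  # request one GPU per worker pod
```

Submitted with `kubectl apply`, the operator creates the launcher and worker pods and tears them down when training completes, which is the “on-demand” pattern the integration aims to hide from data scientists.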

Test-run on Nvidia LaunchPad

Finally, Domino also announced that its platform is coming to Nvidia’s LaunchPad program. This, the company said, will allow teams to quickly test AI projects on the complete stack underpinning joint Domino and Nvidia AI solutions. Teams can use proofs of concept validated by Domino and Nvidia for the trial run and eventually fast-track projects from prototype to production.

“Streamlined deployment and management of GPU-accelerated models bring a true competitive advantage,” Thomas Robinson, VP of strategic partnerships and corporate development at Domino, said. “We led the charge as the first Enterprise MLops platform to integrate with Nvidia AI Enterprise, Nvidia Fleet Command and Nvidia LaunchPad. We are excited to help more customers develop innovative use cases to solve the world’s most important challenges.”

At the previous GTC in November 2021, Domino announced a fully managed offering that leveraged its MLops platform to execute high-performance computing and data science workloads on Nvidia DGX systems in the TCS enterprise cloud. The company also updated its platform to version 5.0, adding, among other things, the capability to increase data science teams’ model velocity (a metric of how fast they can build and update models).

According to Cognilytica, the market for MLops solutions could grow from $350 million to $4 billion by 2025.
