

Dell today announced the release of Omnia, an open source software package aimed at simplifying the deployment and management of AI and other compute-intensive workloads. Developed at Dell’s High Performance Compute (HPC) and AI Innovation Lab in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of HPC, AI, and data analytics workloads to create a single pool of hardware resources.

The release of Omnia comes as enterprises are turning to AI during the health crisis to drive innovation. According to a Statista survey, 41.2% of enterprises say that they’re competing on data and analytics, while 24% say they’ve created data-driven organizations. Meanwhile, 451 Research reports that 95% of companies surveyed for its recent study consider AI technology to be important to their digital transformation efforts.

Dell describes Omnia as a set of Ansible playbooks that speed the deployment of converged workloads with containers and Slurm, along with library frameworks, services, and apps. Ansible, an open source automation tool now maintained by Red Hat, handles configuration management and app deployment, while Slurm is a job scheduler for Linux used by many of the world’s supercomputers and computer clusters.
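To make Slurm’s role concrete, here is a minimal, hypothetical batch script; the job name and resource requests are illustrative and are not taken from Omnia. The `#SBATCH` lines are directives that Slurm’s `sbatch` command parses when scheduling the job, while bash itself treats them as ordinary comments:

```shell
#!/bin/bash
# Hypothetical Slurm batch script (illustrative values, not from Omnia).
# sbatch reads the #SBATCH directives below to decide what resources
# the job needs; to bash they are plain comments.
#SBATCH --job-name=analytics-demo
#SBATCH --nodes=2
#SBATCH --time=00:10:00

echo "Job step running on $(hostname)"
```

Submitted with `sbatch`, Slurm queues the job until the requested nodes are free, then runs the script on the allocated hardware.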

Omnia automatically imprints software solutions onto servers — specifically networked Linux servers — based on the particular use case. For example, these might be HPC simulations, neural networks for AI, or in-memory graphics processing for data analytics. Dell claims that Omnia can reduce deployment time from weeks to minutes.

“As AI with HPC and data analytics converge, storage and networking configurations have remained in silos, making it challenging for IT teams to provide required resources for shifting demands,” Peter Manca, senior VP at Dell Technologies, said in a press release. “With Dell’s Omnia open source software, teams can dramatically simplify the management of advanced computing workloads, helping them speed research and innovation.”


Above: A flow chart describing how Omnia works.

Image Credit: Omnia

Omnia can build clusters that use Slurm or Kubernetes for workload management, and it tries to leverage existing projects rather than reinvent the wheel. The software automates the cluster deployment process, starting with provisioning the operating system to servers, and can install Kubernetes, Slurm, or both, along with additional drivers, services, libraries, and apps.
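As a rough illustration of what such a playbook looks like, the sketch below installs and starts the Slurm compute daemon on a group of nodes. This is a generic Ansible example, not one of Omnia’s actual playbooks; the `compute` host group and the package name are assumptions.

```yaml
# Generic Ansible playbook sketch -- illustrative only, not from Omnia.
- name: Configure Slurm compute nodes
  hosts: compute            # hypothetical inventory group
  become: true
  tasks:
    - name: Install the Slurm compute daemon
      ansible.builtin.package:
        name: slurmd        # package name varies by distribution
        state: present

    - name: Start and enable slurmd
      ansible.builtin.service:
        name: slurmd
        state: started
        enabled: true
```

Because playbooks like this are declarative, rerunning them is safe: Ansible only changes what has drifted from the described state, which is what makes them suitable for managing large clusters.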

“Engineers from ASU and Dell Technologies worked together on Omnia’s creation,” Douglas Jennewein, ASU senior director of research computing, said in a statement. “It’s been a rewarding effort working on code that will simplify the deployment and management of these complex mixed workloads, at ASU and for the entire advanced computing industry.”

In a related announcement today, Dell said that it’s expanding its HPC on-demand offering’s support for VMware environments to include VMware Cloud Foundation, VMware Cloud Director, and VMware vRealize Operations. Beyond this, the company now offers Nvidia A30 and A10 Tensor Core GPUs as options for its Dell EMC PowerEdge R750, R750xa, and R7525 servers.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.