Hewlett Packard Enterprise announced what it is calling a big breakthrough — creating a prototype of a computer with a single bank of memory that can process enormous amounts of information. The computer, known as The Machine, is a custom-built device made for the era of big data.
HPE said it has created the world’s largest single-memory computer. The R&D program is the largest in the history of HPE, the enterprise business that formed when Hewlett-Packard split from the consumer-focused HP Inc. in 2015.
If the project works, it could be transformative for society. But it is no small effort, as it could require a whole new kind of software.
“The secrets to the next great scientific breakthrough, industry-changing innovation, or life-altering technology hide in plain sight behind the mountains of data we create every day,” said Meg Whitman, CEO of Hewlett Packard Enterprise, in a statement. “To realize this promise, we can’t rely on the technologies of the past, we need a computer built for the Big Data era.”
The prototype unveiled today contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over — or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing, HPE said.
Based on the current prototype, HPE expects the architecture could easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly limitless pool of memory — 4,096 yottabytes. For context, that is 250,000 times the entire digital universe today.
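HPE's scale claims can be sanity-checked with quick arithmetic. The figures below assume decimal SI units (1 terabyte = 10^12 bytes, 1 yottabyte = 10^24 bytes), which matches how the company's numbers line up:

```python
# Back-of-the-envelope check of HPE's scale claims (decimal SI units assumed).
PROTOTYPE_BYTES = 160 * 10**12    # the 160 TB prototype
POOL_BYTES = 4096 * 10**24        # the projected 4,096-yottabyte memory pool

# How many of today's prototypes would the projected pool equal?
prototypes_needed = POOL_BYTES // PROTOTYPE_BYTES

# HPE says the pool is 250,000x today's digital universe; inverting that
# implies an estimate of the digital universe itself, in zettabytes.
digital_universe_zb = POOL_BYTES / 250_000 / 10**21

print(f"{prototypes_needed:.2e} prototypes")      # on the order of 10^13
print(f"~{digital_universe_zb:.1f} zettabytes")   # ~16.4 ZB
```

Inverting the 250,000× claim yields roughly 16.4 zettabytes, consistent with analyst estimates of the global datasphere around the time of the announcement.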
With that amount of memory, HPE said it will be possible to simultaneously work with every digital health record of every person on earth, every piece of data from Facebook, every trip of Google’s autonomous vehicles, and every data set from space exploration all at the same time — getting to answers and uncovering new opportunities at unprecedented speeds.
“We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society,” said Mark Potter, CTO at HPE and director of Hewlett Packard Labs, in a statement. “The architecture we have unveiled can be applied to every computing category — from intelligent edge devices to supercomputers.”
Memory-Driven Computing, as HPE calls this type of computer, puts memory, not the processor, at the center of the computing architecture. By eliminating the inefficiencies of how memory, storage, and processors interact in traditional systems today, Memory-Driven Computing can reduce the time needed to process complex problems from days to hours, hours to minutes, and minutes to seconds to deliver real-time intelligence.
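The core idea can be illustrated in miniature with memory mapping. The sketch below is illustrative only and not HPE's actual software stack: it contrasts the traditional model (copy data from storage into a buffer, then compute) with in-place, byte-addressable access to a shared region, which is the access pattern a large persistent memory pool enables. The file name and sizes are hypothetical stand-ins.

```python
# Illustrative sketch only: contrasts copy-based I/O with in-place,
# byte-addressable access, the access pattern behind Memory-Driven Computing.
# The file here is a hypothetical stand-in for a fabric-attached memory pool.
import mmap
import os

PATH = "dataset.bin"
SIZE = 1 << 20  # 1 MiB stand-in region

# Create the stand-in "persistent memory" region, zero-filled.
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

# Traditional model: storage -> buffer copy -> compute on the copy.
with open(PATH, "rb") as f:
    buf = f.read()            # full copy into this process's memory
    total = sum(buf)

# Memory-driven model (approximated): map the region and compute in place.
# Many processes could map the same pool with no per-process copies.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as mem:
        mem[0] = 42           # byte-addressable update, no read-modify-write of a file
        total_mapped = sum(mem[:])

os.remove(PATH)
print(total, total_mapped)    # 0 and 42
```

The real system replaces the file with a fabric-attached pool of persistent memory shared by all 40 nodes, but the programming consequence is the same: data is operated on where it lives, rather than being staged through copies between storage, DRAM, and processor caches.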
“The Machine is architected the way devices will be built in the future,” said Patrick Moorhead, analyst at Moor Insights & Strategy. “That is, with a massive memory footprint and a combination of memory and storage. This helps analytical and machine learning workloads and also allows accelerators to get direct access to a massive memory-storage footprint. Much of the industry is coming at it from a storage point of view, as with Intel’s 3D XPoint, which is speeding up storage. I expect the different approaches to mesh in three to five years.”
The new prototype has 160 TB of shared memory spread across 40 physical nodes, interconnected using a high-performance fabric protocol. It runs an optimized Linux-based operating system (OS) on the ThunderX2, Cavium’s flagship second-generation, dual-socket-capable, workload-optimized ARMv8-A system on a chip.
It also has photonics and optical communication links, including the new X1 photonics module. And HPE has built software programming tools designed to take advantage of abundant persistent memory.
“Cavium shares HPE’s vision for Memory-Driven Computing and is proud to collaborate with HPE on The Machine program,” said Syed Ali, president and CEO of Cavium, in a statement. “HPE’s groundbreaking innovations in Memory-Driven Computing will enable a new compute paradigm for a variety of applications, including the next generation data center, cloud and high performance computing.”
Bob Sorenson, analyst at Hyperion Research, said in an email:
Basically, the Machine is an attempt to build, in essence, a new kind of computer architecture that integrates processors and memory seamlessly using a flexible interconnect scheme. Although HPCs offer more computational capability each year, the ability of those systems to move data to and from memory to their ever more powerful, and more numerous, processors is rapidly becoming the most significant bottleneck in HPC performance. This is the case across the entire spectrum of HPC use cases, including traditional modeling and simulation as well as new and emerging use cases in big data and deep learning. And it is only going to get worse as more and more HPC jobs rely on larger data sets.
The Machine represents a new way to integrate memory and processors as a way to address that critical bottleneck, and by doing so offers significant performance improvements on a number of existing applications. But perhaps more important, the ability to closely integrate processors and memory opens up a host of new algorithms and applications that would not be workable on traditional HPCs. In addition, the Machine offers a straightforward shared-memory scheme that allows for faster and more effective software development of these new applications.
It’s not clear how well this new development will succeed in the marketplace, but at a minimum, I think that HPE should be applauded for its vision to push forward the state of the art in HPC technology with a bold new design, instead of merely trying to maintain the status quo in HPC development.