The new servers are powered by the IBM Power9 processor, which was designed for compute-intensive AI workloads. It’s one more sign that server companies and chip designers are repositioning for the day when AI eats up most of our available computing power.
IBM said the new Power9 systems can cut the training times of deep learning neural networks by 4X, allowing enterprises to build more accurate AI applications and run them faster. Deep learning is a machine learning method that extracts information by crunching through millions of data points to detect and rank the most important features of that data.
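To make the "training" claim concrete, here is a hypothetical toy sketch (not IBM's benchmark workload) of what training means: a model's parameters are repeatedly nudged to reduce error on data, and faster hardware shortens how long those millions of update iterations take.

```python
# Toy gradient-descent training loop (illustrative only, unrelated to
# IBM's benchmarks): fit a one-parameter model y = w * x to data
# generated from the true relationship y = 2x.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0    # model parameter, starts untrained
lr = 0.01  # learning rate

for epoch in range(200):            # each full pass over the data is an epoch
    for x, y in data:
        pred = w * x                # forward pass: model's prediction
        grad = 2 * (pred - y) * x   # gradient of squared error w.r.t. w
        w -= lr * grad              # update step: nudge w downhill

print(round(w, 3))  # converges near the true value, 2.0
```

A real deep network repeats this same loop over millions of parameters and examples, which is why interconnect and accelerator speed dominate training time.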
The new Power9-based AC922 Power Systems are the first to embed PCI-Express 4.0, next-generation Nvidia NVLink, and OpenCAPI. Combined, IBM said, these interconnects can move data up to 9.5X faster than rival x86 systems built on PCI-Express 3.0.
The system was designed to drive performance improvements across popular AI frameworks such as Chainer, TensorFlow, and Caffe, as well as accelerated databases such as Kinetica.
IBM is targeting the systems at data scientists who are focused on deep learning insights in scientific research, real-time fraud detection, and credit risk analysis.
“Google is excited about IBM’s progress in the development of the latest Power technology,” said Bart Sano, vice president of Google Platforms, in a statement. “The Power9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google datacenters.”
IBM has been working on the Power9 chips for more than four years.
“We’ve built a game-changing powerhouse for AI and cognitive workloads,” said Bob Picciano, senior vice president of IBM Cognitive Systems, in a statement. “In addition to arming the world’s most powerful supercomputers, IBM Power9 Systems is designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery enabling transformational business outcomes across every industry.”