Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success.
Sophisticated machine learning applications require not only enormous amounts of training data, but also powerful computer hardware on which to train. An analysis by San Francisco research lab OpenAI found that since 2012, the amount of compute used in the largest training runs has been increasing exponentially with a 3.4-month doubling time, growing by more than 300,000 times over that period.
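As a rough sanity check on those figures, a fixed 3.4-month doubling time compounds to on the order of 300,000x over the span OpenAI measured. The exact month count below is an assumption chosen to be consistent with the article's claim, not a figure from the analysis itself:

```python
# Exponential growth with a fixed doubling time:
#   growth = 2 ** (elapsed_months / doubling_months)
doubling_months = 3.4

# Assumed window of roughly late 2012 to early 2018 (~62 months),
# consistent with the ~300,000x growth figure quoted above.
elapsed_months = 62

growth = 2 ** (elapsed_months / doubling_months)
print(f"compute growth: ~{growth:,.0f}x")  # on the order of 300,000x
```

The takeaway is that a seemingly modest doubling period compounds explosively: each extra year multiplies the total by roughly 2^(12/3.4), or about 11x.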
The trend spurred the development of supercomputers like the U.S. Department of Energy’s Sierra and Summit, which leverage dedicated accelerator chips to speed up AI computation. Now IBM, in collaboration with New York State, SUNY Polytechnic Institute, and other members of IBM’s AI Hardware Center, has delivered a new machine for the Department of Computer Science at Rensselaer Polytechnic Institute (RPI) that’s optimized for state-of-the-art machine learning workloads.
It’s dubbed Artificial Intelligence Multiprocessing Optimized System, or AiMOS (in honor of Rensselaer cofounder Amos Eaton), and it will principally tackle projects in biology, chemistry, the humanities, and related domains underway at the new IBM Research AI Hardware Center on the SUNY campus in Albany. It’s built using the same IBM Power Systems technology as the aforementioned Sierra and Summit, including IBM Power9 processors and Nvidia Tesla V100 GPUs. And at eight petaflops on the High-Performance Linpack benchmark, a yardstick of supercomputing performance, AiMOS is one of the most powerful computers to debut on the Top500 ranking of supercomputers.
In fact, IBM claims AiMOS is currently the most powerful supercomputer housed at a private university, the 24th-most powerful supercomputer in the world, and the third-most energy efficient.
“As the home of one of the top high-performance computing systems in the U.S. and in the world, Rensselaer is excited to accelerate our ongoing research in AI, deep learning, and in fields across a broad intellectual front,” said Rensselaer president Shirley Ann Jackson. “The creation of new paradigms requires forward-thinking collaborators, and we look forward to working with IBM and the state of New York to address global challenges in ways that were previously impossible.”
AiMOS will provide modeling, simulation, and computation to advance the development of computing chips and systems optimized for AI algorithms, according to IBM executive vice president John E. Kelly III, with the goal of helping to make AI systems 1,000 times more efficient within the next decade. “[AI] … will help us solve some of our most pressing problems, from healthcare to security to climate change. In order to realize AI’s full potential, special purpose computing hardware is emerging as the next big opportunity.”
Of course, as speedy as AiMOS may be, it trails the Intel-powered Frontera at the Texas Advanced Computing Center, which can achieve a peak performance of 38.7 quadrillion floating-point operations per second (38.7 petaflops). Frontera is the fastest computer designed for academic workloads like modeling and simulation, big data, and machine learning, a distinction it earned for the second time in November by taking fifth place on the Top500 list.
And AiMOS will certainly be outgunned by the AMD-powered Frontier, a machine with more than 1.5 exaflops of theoretical peak performance that’s anticipated to be delivered to Oak Ridge National Laboratory in 2021. Intel, for its part, plans to deploy the “exaflop-class” Aurora cluster at Argonne National Laboratory within the next two years as part of the Energy Department’s Exascale Computing Project, a program that seeks to accelerate exascale computing research in the U.S.
Currently, the U.S. hosts five of the 10 fastest computers in the world, with China’s best — the TaihuLight at the National Supercomputing Center in Wuxi (built on Sunway’s SW26010 processor architecture) and the Tianhe-2A in Guangzhou — ranking third and fourth, respectively, at roughly 125 peak petaflops and 100 peak petaflops. Cray’s Piz Daint sits in sixth, ahead of Trinity at Los Alamos National Laboratory, Fujitsu’s AI Bridging Cloud Infrastructure (ABCI) in Japan, and Lenovo’s SuperMUC-NG in Germany.
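To keep the units in these comparisons straight, a petaflop is 10^15 floating-point operations per second and an exaflop is 10^18, so the planned Frontier is roughly 1,500 petaflops. A minimal sketch putting the figures quoted in this article on one scale (note they mix benchmark and theoretical-peak numbers, as the comments flag):

```python
PETA = 1e15  # floating-point operations per second in one petaflop
EXA = 1e18   # one exaflop = 1,000 petaflops

# Figures as quoted in this article; not all are directly comparable.
systems = {
    "AiMOS": 8 * PETA,                # measured HPL benchmark result
    "Frontera": 38.7 * PETA,          # peak performance
    "Tianhe-2A": 100 * PETA,          # peak performance
    "TaihuLight": 125 * PETA,         # peak performance
    "Frontier (planned)": 1.5 * EXA,  # theoretical peak, not yet built
}

for name, flops in sorted(systems.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>20}: {flops / PETA:>7.1f} petaflops")
```

Normalized this way, the planned exascale machines are more than an order of magnitude beyond anything on the current list.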
The race between China and the U.S. is fierce. In Top500 rankings, China two years ago surpassed the United States in total number of ranked supercomputers for the first time, with 202 to 143. That trend accelerated the following year; according to the Top500 fall 2018 report, the number of ranked U.S. supercomputers fell to 108 as China’s total climbed to 229.
After China and the U.S., the countries with the most ranked supercomputers are Japan, with over 30 systems; the U.K., with over 20; France, with nearly 20; Germany, with over 15; and Ireland, with just over 10.