It’s been just a few short weeks since Apple unveiled the A12 Bionic, but at an event in New York City, the Cupertino company upstaged it with a more powerful model: the A12X Bionic. It’s the chip in the new iPad Pro.
Like the A12, Apple’s A12X is built on a 7-nanometer process, but it’s a physically larger chip.
“No other tablet, laptop, or even desktop has been able to make this leap forward,” John Ternus, vice president of hardware engineering, said onstage.
It packs 10 billion transistors and comprises a seven-core GPU and an eight-core CPU, the latter split between four performance cores and four efficiency cores. (That’s compared to the A12’s six-core CPU and four-core GPU.) Single-core CPU performance is up to 35 percent faster than last year’s iPad Pro chip, and multicore performance is up to 90 percent faster, thanks in large part to a new performance controller that lets the chip run all eight CPU cores simultaneously.
The graphics processing unit (GPU), meanwhile, is twice as fast, with better tessellation and multilayer rendering performance. And a new storage controller efficiently handles up to 1TB of storage.
Apple says it delivers “Xbox One S-class” graphics performance in a package that is much smaller, and claims it’s faster than 92 percent of all portable PCs.
The A12X, like the A12, has Apple’s eight-core Neural Engine, which is designed for real-time machine learning tasks like recognizing faces.
The Neural Engine is an eight-core design (up from the two-core engine in the A11) capable of up to five trillion operations per second, compared with 600 billion for the last-gen engine. The chip also includes a smart compute system that automatically decides whether to run a given algorithm on the CPU, the GPU, the Neural Engine, or some combination of the three.
Together, Apple says, those innovations deliver “all-day” battery life.
Apps built with Core ML 2, Apple’s machine learning framework, can crunch numbers up to nine times faster on the A12X Bionic while using one-tenth the power. Those apps also launch up to 30 percent faster, thanks to algorithms that learn your usage habits over time.
Real-time machine learning features enabled by the new hardware include Siri Shortcuts, which lets users create and run app macros via custom Siri phrases; Memoji, customizable animated avatars designed to look like you; Face ID; and Apple’s augmented reality toolkit, ARKit 2.0.
Ternus said the engineering team had to retrain the underlying Face ID neural networks to match how people use the iPad — upside down, in portrait, and in landscape.
Today’s news follows on the heels of Apple’s Core ML 2 announcement this summer.
Core ML 2 is 30 percent faster, Apple said at its Worldwide Developers Conference in June, thanks to a technique called batch prediction. Furthermore, Apple said the toolkit would let developers shrink the size of trained machine learning models by up to 75 percent through quantization.
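Apple doesn’t detail its quantization scheme, but the arithmetic behind that 75 percent figure is straightforward: storing 32-bit float weights as 8-bit integers plus a scale and offset shrinks the weight payload to a quarter of its size. Here’s a minimal linear-quantization sketch (illustrative only, not Core ML’s actual implementation):

```python
import numpy as np

# Hypothetical float32 weight matrix from a trained model.
weights = np.random.randn(256, 256).astype(np.float32)

# Linear quantization to 8 bits: map the float range onto 0..255.
w_min, w_max = weights.min(), weights.max()
scale = (w_max - w_min) / 255.0
quantized = np.round((weights - w_min) / scale).astype(np.uint8)

# Dequantize at inference time (lossy, but close to the original).
restored = quantized.astype(np.float32) * scale + w_min

print(f"original:  {weights.nbytes} bytes")
print(f"quantized: {quantized.nbytes} bytes")  # one quarter the size
print(f"max error: {np.abs(weights - restored).max():.4f}")
```

The size savings are exact (4 bytes per weight down to 1), while the cost is a small per-weight rounding error bounded by the scale factor.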
Apple introduced Core ML in June 2017 alongside iOS 11. It lets developers run trained machine learning models directly on an iPhone or iPad, and convert models from frameworks like XGBoost, Keras, LibSVM, scikit-learn, and Facebook’s Caffe and Caffe2. Core ML optimizes models for power efficiency, and because inference happens locally, apps don’t need an internet connection to benefit from machine learning.
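The no-internet-required point is the key design property: once a converted model ships inside the app bundle, a prediction is just local arithmetic with no server round trip. A toy illustration of that idea (the `tiny_logistic_model` and `predict` names below are hypothetical stand-ins, not Core ML API):

```python
import math

# Toy stand-in for a converted model: weights bundled with the app,
# so prediction is pure local computation with no network call.
tiny_logistic_model = {"weights": [0.8, -0.5, 1.2], "bias": 0.1}

def predict(model, features):
    """Logistic-regression forward pass, entirely on-device."""
    z = model["bias"] + sum(w * x for w, x in zip(model["weights"], features))
    return 1.0 / (1.0 + math.exp(-z))

score = predict(tiny_logistic_model, [1.0, 2.0, 0.5])
print(f"prediction: {score:.3f}")
```

A real Core ML model is of course far more complex, but the workflow is the same shape: train and convert once, then evaluate locally on every prediction.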