Apple announced Core ML 2, a new version of its machine learning framework for iOS devices, at the Worldwide Developers Conference (WWDC) 2018 in San Jose, California today.
Core ML 2 is 30 percent faster, Apple says, thanks to a technique called batch prediction. Furthermore, Apple said the toolkit will let developers shrink the size of trained machine learning models by up to 75 percent through quantization.
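To see where the "up to 75 percent" figure comes from, here is a minimal, hypothetical sketch of linear 8-bit weight quantization in pure Python. The function names are illustrative, not Core ML or coremltools APIs; the point is simply that storing one byte per weight instead of a 4-byte float cuts storage by three quarters.

```python
# Illustrative sketch of linear weight quantization, the general
# technique behind Core ML 2's model-size reduction. Names here
# are hypothetical, not actual Core ML or coremltools APIs.

def quantize(weights, nbits=8):
    """Map floats onto 2**nbits evenly spaced levels."""
    levels = (1 << nbits) - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / levels or 1.0  # guard against constant weights
    codes = [round((w - w_min) / scale) for w in weights]
    return codes, scale, w_min

def dequantize(codes, scale, w_min):
    """Recover approximate float weights from the stored codes."""
    return [c * scale + w_min for c in codes]

weights = [0.5, -1.25, 3.0, 0.0, 2.75]
codes, scale, w_min = quantize(weights)
restored = dequantize(codes, scale, w_min)

# One byte per 8-bit code versus four bytes per float32 weight:
# a 75 percent reduction in storage for the weights themselves.
float32_bytes = 4 * len(weights)
int8_bytes = 1 * len(codes)
```

Quantization trades a small loss of precision (each restored weight is off by at most half a quantization step) for a much smaller on-device model, which matters when models ship inside app bundles.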
Apple also announced Create ML, a new GPU-accelerated tool for native AI model training on Macs. The tool supports vision and natural language, as well as custom data. And because it’s built in Swift, developers can use drag-and-drop programming interfaces like Xcode Playgrounds to train models. “It’s really easy to use,” Apple senior vice president of software engineering Craig Federighi said onstage.
Federighi explained that it used to take one developer, Memrise, 24 hours to train a model with 20,000 images, but that Create ML reduced the training time for the same model to 48 minutes on a MacBook Pro and 18 minutes on an iMac Pro. Create ML also reduced the size of the model from 90MB to 3MB.
Apple introduced Core ML in June 2017 with the launch of iOS 11. It allows developers to run trained machine learning models on-device on an iPhone or iPad, and to convert models from frameworks like XGBoost, Keras, LibSVM, scikit-learn, and Facebook’s Caffe and Caffe2. Core ML is designed to optimize models for power efficiency, and because inference runs on-device, apps don’t need an internet connection to benefit from machine learning models.
News of Core ML’s update comes hot on the heels of ML Kit, a machine learning software development kit for Android and iOS that Google announced at its I/O 2018 developer conference in May. In December 2017, Google released a tool that converts AI models produced using TensorFlow Lite, its machine learning framework, into a file type compatible with Apple’s Core ML.
Core ML is expected to play a key role in Apple’s future hardware products. The company is reportedly developing a chip — the Apple Neural Engine, or ANE — to accelerate computer vision, speech recognition, facial recognition, and other forms of artificial intelligence, and plans to include it in upcoming devices. Apple will offer third-party developers access to the chip so they can run their own AI, according to Bloomberg.
In a hint at the company’s ambitions, Apple hired John Giannandrea, a former Google executive who oversaw the implementation of AI-powered features in Gmail, Google Search, and the Google Assistant, to head up its machine learning and AI strategy. And it is looking to hire more than 150 people to staff its Siri team.