At Apple’s Worldwide Developers Conference 2018, the Cupertino company announced Core ML 2, a new version of its machine learning software development kit (SDK) for iOS devices. But it’s not the only game in town — just a few months ago, Google announced ML Kit, a cross-platform AI SDK for both iOS and Android devices. Both toolkits aim to ease the development burden of optimizing large AI models and datasets for mobile apps. So how are they different?

Core ML

Apple’s Core ML debuted in June 2017 as a no-frills way for developers to integrate trained machine learning models into their iOS, macOS, and tvOS apps; trained models are loaded into Apple’s Xcode development environment and packaged in an app bundle. Core ML 2 is much the same, but more efficient. Apple says it’s 30 percent faster, thanks to batch prediction, and that it can shrink the size of models by up to 75 percent with quantization.

Still, it’s not perfect. Unlike Google’s ML Kit, it isn’t cross-platform (there’s no Android support), and although it can download models, features like versioning require a third-party service such as IBM’s Watson Studio. (Developers, of course, can test different machine learning models by using Apple’s TestFlight feature.)

The newest version of Core ML supports 16-bit floating point and all levels of quantization, down to 1 bit, which can greatly reduce the size of AI models. It can update models at runtime from a cloud service like Amazon Web Services (AWS) or Microsoft’s Azure, and it ships with a converter that works with Facebook’s Caffe and Caffe2, Keras, scikit-learn, XGBoost, LibSVM, and Google’s TensorFlow Lite. (Developers can create custom converters for frameworks that aren’t supported.)
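As a rough illustration of why quantization shrinks models so dramatically, here is a back-of-the-envelope sketch (the function name and weight count are hypothetical, not part of Apple’s tooling): cutting 32-bit float weights to 8 bits yields the roughly 75 percent reduction Apple cites, and 1-bit weights pack 32 values into the space of one float.

```python
# Hypothetical sketch: how weight quantization shrinks a model's on-disk
# size. Core ML stores weights as 32-bit floats by default; quantizing to
# 16 bits halves them, 8 bits cuts them to a quarter (the ~75% figure),
# and 1 bit packs 32 weights into the space of one full-precision float.

def quantized_weight_bytes(num_weights: int, bits: int) -> int:
    """Bytes needed to store `num_weights` at `bits` bits each (ignoring
    the small lookup table that low-bit quantization also stores)."""
    total_bits = num_weights * bits
    return (total_bits + 7) // 8  # round up to whole bytes

weights = 10_000_000  # e.g., a mid-sized vision model
full = quantized_weight_bytes(weights, 32)
for bits in (16, 8, 1):
    q = quantized_weight_bytes(weights, bits)
    print(f"{bits:>2}-bit: {q / full:.0%} of original size")
```

Real quantizers also store a small lookup table mapping the reduced bit patterns back to float values, so actual savings are slightly smaller than this arithmetic suggests.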

Apple touts its privacy benefits (apps don’t need to pass data over a network), and it says that Core ML is optimized for power efficiency.

New this year is Create ML, a GPU-accelerated tool for native AI model training on Mac computers that supports vision and natural language. And because it’s coded in Swift, developers can use drag-and-drop programming interfaces like Xcode Playgrounds to train models.

For developers seeking an off-the-shelf solution that doesn’t require training, there are Apple’s Vision API and Natural Language framework, which make it easy to build apps with on-device face detection, barcode scanning, text analysis, named entity recognition, and other features without having to worry about crafting an algorithm.

ML Kit

At its I/O 2018 developer conference in May, Google introduced ML Kit, a cross-platform suite of machine learning tools for its Firebase mobile development platform. ML Kit uses the Neural Network API on Android devices and is designed to compress and optimize machine learning models for mobile devices.

ML Kit leverages the power of Google Cloud Platform’s machine learning technology for “enhanced” accuracy. Google’s on-device image labeling service, for example, features about 400 labels, while its cloud-based version has more than 10,000.

It also offers a couple of easy-to-use APIs for basic use cases: text recognition, face detection, barcode scanning, image labeling, and landmark recognition. Google says that new APIs, including a smart reply API that supports in-app contextual messaging replies and an enhanced face detection API with high-density face contours, will arrive in late 2018.

Custom models trained with TensorFlow Lite, Google’s lightweight offline machine learning framework for mobile devices, can be deployed with ML Kit via the Firebase console, which serves them at app runtime. (Google says it’s also working on a compression tool that converts full TensorFlow models into TensorFlow Lite models.) Developers have the option of decoupling machine learning models from apps and serving them at runtime, shaving megabytes off of app install sizes and ensuring models always remain up to date.
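The decoupled-serving idea described above can be sketched as a version check against a server-side manifest. The names below (MANIFEST, fetch_model) are illustrative stand-ins, not the actual Firebase API:

```python
# Hypothetical sketch of ML Kit's "decoupled model" idea: the app ships
# without the model baked in and fetches whichever version the backend
# currently serves, so installs stay small and models stay current.

# Pretend server-side manifest: model name -> latest version tag
MANIFEST = {"smart_reply": "v3"}

# On-device cache: model name -> (version, model bytes)
LOCAL_CACHE = {}

def fetch_model(name, download):
    """Return an up-to-date model, hitting the network only when the
    cached copy lags the manifest (what runtime serving automates)."""
    latest = MANIFEST[name]
    cached = LOCAL_CACHE.get(name)
    if cached and cached[0] == latest:
        return cached[1]                # cache hit: no download
    blob = download(name, latest)       # fetch on miss or stale cache
    LOCAL_CACHE[name] = (latest, blob)
    return blob

# Simulated usage: count how often we actually "download"
calls = []
fake_download = lambda n, v: calls.append(v) or f"{n}-{v}".encode()
fetch_model("smart_reply", fake_download)  # downloads v3
fetch_model("smart_reply", fake_download)  # served from cache
MANIFEST["smart_reply"] = "v4"             # backend publishes new model
fetch_model("smart_reply", fake_download)  # downloads v4
print(calls)  # ['v3', 'v4']
```

The trade-off, which a real deployment has to handle, is that a first launch with no cached model needs network access before inference can run at all.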

Finally, ML Kit works with Firebase features like A/B testing, which lets users test different machine learning models dynamically, and Cloud Firestore, which stores image labels and other data.

Which is better?

So which machine learning framework has the upper hand? Neither, really.

Core ML 2 doesn’t support Android, of course, and developers familiar with Google’s Firebase are likely to prefer ML Kit. Conversely, longtime Xcode users will probably tend toward Core ML 2.

Perhaps the biggest difference between the two is first-party plug-and-play support: Google provides a wealth of prebuilt machine learning models and APIs from which to choose, including APIs for contextual message replies and barcode scanning. Apple, on the other hand, is a little more hands-off.

As in many things, choosing between Core ML 2 and ML Kit is mostly a matter of personal preference, and of whether the developer in question prefers a top-to-bottom solution like Firebase or a piecemeal one like Core ML, Create ML, and Apple’s various machine learning APIs.

Update on June 6: We’ve amended the Core ML 2 section to reflect that it supports machine learning model downloads and quantization, doesn’t ship with prebuilt models, and can compress models into smaller packages.
