One of the most underrated announcements at Apple’s Worldwide Developers Conference Monday was the company’s unveiling of Core ML, a programming framework designed to make it easier to run machine learning models on its mobile devices.

Core ML will be part of iOS 11, which is expected to launch later this year. It allows developers to load trained machine learning models onto an iPhone or iPad and then use them to generate predictions inside applications. While it was possible for developers to do that on their own in the past, the new framework is designed to make it easier for apps to process data locally using machine learning, without sending user information to the cloud.

In addition, the framework is designed to optimize models for Apple’s mobile devices, which should reduce RAM use and power consumption — both important for computationally intensive tasks like machine learning inference.

Processing machine learning data on-device provides a number of benefits. Apps don’t need an internet connection to get the benefits of machine learning models, and they may also be able to process data faster without waiting for information to pass back and forth over a network. Users also gain privacy, since data doesn’t have to leave the device to produce intelligent results.

Apple isn’t the only company working on bringing machine learning to mobile devices. Google announced a new TensorFlow Lite programming framework at its I/O developer conference a couple of weeks ago that’s supposed to make it easier for developers to build models that run on lower-powered Android devices.

Developers have to convert trained models into a special format that works with Core ML. Once that’s done, they can load the model into Apple’s Xcode development environment and deploy it to an iOS device. The company released four pre-built machine learning models based on popular open source projects, and also made a converter available so that developers can port their own.

The converter works with popular frameworks like Caffe, Keras, scikit-learn, XGBoost and LibSVM. In the event developers have a model created with a framework that isn’t supported, Apple has made it possible for them to write their own converter.
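At a high level, a custom converter walks the layers of a source-framework model and emits an equivalent description in Core ML’s model format. The sketch below illustrates that idea only: the layer names and the spec dictionary are invented for illustration and are not Apple’s actual .mlmodel schema, which a real converter would produce as a protobuf, typically via Apple’s coremltools package.

```python
# Toy sketch of what a custom model converter does: map each layer of a
# source-framework model onto an equivalent op in the target format.
# The dict "spec" below is a simplified stand-in for Core ML's real
# .mlmodel protobuf, which a production converter would build instead.

# Hypothetical mapping from source-framework layer types to target ops.
LAYER_MAP = {
    "dense": "innerProduct",
    "conv2d": "convolution",
    "relu": "activationReLU",
}

def convert(source_layers):
    """Translate a list of (layer_type, params) pairs into a spec dict."""
    converted = []
    for layer_type, params in source_layers:
        if layer_type not in LAYER_MAP:
            raise ValueError(f"unsupported layer type: {layer_type}")
        converted.append({"op": LAYER_MAP[layer_type], "params": params})
    return {"formatVersion": 1, "layers": converted}

# Example: a tiny three-layer network from a hypothetical source framework.
spec = convert([
    ("conv2d", {"kernel": 3, "filters": 16}),
    ("relu", {}),
    ("dense", {"units": 10}),
])
print([layer["op"] for layer in spec["layers"]])
# → ['convolution', 'activationReLU', 'innerProduct']
```

The real work in a converter is exactly this kind of layer-by-layer translation, plus carrying over the trained weights; unsupported layers are the usual failure point, which is why Apple supports only a fixed set of source frameworks out of the box.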

Core ML is the latest in Apple’s set of Core frameworks, which include Core Location, Core Audio and Core Image. They’re all designed to help developers create more advanced applications by abstracting away complicated tasks.

Core ML could also hold the key to Apple’s future hardware moves. The company is rumored to be working on a dedicated chip to handle machine learning tasks, and it’s possible this framework would be developers’ gateway to that silicon.