Yesterday marked the conclusion of Google’s I/O 2019 developer conference in San Francisco, where the company announced updates across its product portfolio. Among the highlights were the latest beta release of Android Q, Google’s cross-hardware operating system; the Pixel 3a and Pixel 3a XL; the Nest Hub Max; augmented reality in Google Search; Duplex on the web; enhanced walking directions in Google Maps; and far more than can be recounted here.
It’s not easy to craft a narrative that touches on hundreds (if not thousands) of platforms and services, but Google made an effort to bring privacy to the fore this week — particularly in the area of artificial intelligence and machine learning.
The company detailed its work in federated learning, a distributed AI approach that trains models directly on users' devices and sends only anonymized, encrypted model updates (never the raw data itself) to the cloud, where they are aggregated into an improved shared model. (Google says its Gboard keyboard for Android and iOS already uses federated learning to improve next-word and emoji prediction across "tens of millions" of devices.) The work dovetails, the company said, with recent privacy-focused improvements in its open source machine learning framework, TensorFlow.
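The core idea, federated averaging, can be sketched in a few lines. The code below is a simplified illustration, not Google's implementation: each client runs a few steps of gradient descent on its private data, and the server averages the resulting weights without ever seeing the raw examples.

```python
# Simplified federated averaging sketch (illustrative only, not Google's code).
# Each client fits a tiny linear model y = w * x on its private data and sends
# only its updated weight to the server; raw examples never leave the client.

def client_update(w, local_data, lr=0.01, epochs=5):
    """Run a few steps of gradient descent on the client's private data."""
    for _ in range(epochs):
        for x, y in local_data:
            grad = 2 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
            w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """One round: clients train locally, the server averages their weights."""
    updates = [client_update(global_w, data) for data in client_datasets]
    return sum(updates) / len(updates)

# Three clients, each holding private samples of the same underlying y = 3x line.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
    [(0.5, 1.5), (1.5, 4.5)],
]

w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
# After 50 rounds, w converges to roughly 3.0 without any client sharing its data.
```

Real deployments layer secure aggregation and compression on top of this loop so the server cannot inspect any individual client's update, but the averaging step above is the essence of the technique.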
During the TensorFlow Developer Summit in March, Google announced TensorFlow Federated (TFF), a module that facilitates deploying and training AI systems on data spread across multiple local, separate devices. At the same conference, Google debuted TensorFlow Privacy, a TensorFlow library intended to make it easier to train models with strong privacy guarantees such as differential privacy.
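The central technique behind differentially private training is to clip each example's gradient so no single example can dominate, then add calibrated noise to the aggregate. The sketch below illustrates that clip-and-noise step in plain Python; it is a conceptual illustration of the idea, not TensorFlow Privacy's actual API.

```python
# Conceptual sketch of differentially private gradient aggregation
# (the clip-and-noise idea behind DP-SGD; not TensorFlow Privacy's API).
import random

def dp_average_gradients(per_example_grads, clip_norm=1.0, noise_stddev=0.1):
    """Clip each per-example gradient, sum, add Gaussian noise, and average."""
    clipped = []
    for g in per_example_grads:
        norm = abs(g)  # scalar gradients for simplicity
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append(g * scale)  # bound any single example's influence
    noisy_sum = sum(clipped) + random.gauss(0.0, noise_stddev * clip_norm)
    return noisy_sum / len(per_example_grads)

# An outlier gradient (100.0) is clipped to clip_norm before averaging, so it
# cannot reveal much about the unusual example that produced it.
grads = [0.2, -0.4, 0.3, 100.0]
avg = dp_average_gradients(grads, clip_norm=1.0, noise_stddev=0.1)
```

Because every example's contribution is bounded and masked by noise, an attacker observing the trained model learns very little about whether any particular example was in the training set, which is the formal guarantee differential privacy provides.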
Separately, on the second day of I/O, Google published a list of privacy commitments regarding its hardware products in which it detailed how personal data is used and how it can be controlled. The document notes, for instance, that the new camera-toting Nest Hub Max, which leverages an on-device facial recognition feature dubbed Face Match to spot familiar people and surface contextually relevant information, doesn't send facial recognition data to the cloud.
This week, Google also unveiled an improved Google Assistant that can perform tasks more quickly and that doesn’t require repeated triggering with a hotword (e.g., “Hey Google”). The company said that because the new speech recognition model is far smaller than that of the current version — half a gigabyte now compared with roughly 100 gigabytes — it’s able to complete tasks like transcription, file searches, and selfie-snapping offline, without an internet connection.
“On-device machine learning powers everything from these incredible breakthroughs like Live Captions to helpful everyday features like Smart Reply,” explained Google’s senior director of Android, Stephanie Cuthbertson. “And it does this with no user input ever leaving the phone, all of which protects user privacy.”
These, of course, weren’t the only privacy-related announcements Google made onstage and in the weeks leading up to I/O. The company also rolled out a setting that will let users delete location data automatically. It revealed that Chrome will implement policies that make cookies more private, alongside anti-fingerprinting technology that will prevent ad networks from tracking users’ behavior without their consent. And it teased Incognito Mode for Google Maps, a setting that, when enabled, won’t associate places users have searched for and navigated to with their accounts.
Google’s recommitment to privacy comes as skepticism toward the tech industry’s handling of personal data reaches an all-time high. About 91% of Americans say they’ve lost control over how their personal information is collected and used, according to the Pew Research Center, and 89% of people think companies should be more transparent about how their products use data.
The cynicism isn’t all that surprising, given events like the Facebook and Cambridge Analytica data scandal. And Google is not immune: A Wall Street Journal report last summer revealed that Google+, Google’s now-shuttered social network, failed to disclose an exploit that might have exposed the data of more than 500,000 users.
With this week’s announcements, Google is betting that a privacy-forward approach to AI — and to its products and services more broadly — will keep it in the good graces of its billions-strong userbase. Time will tell.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading,
AI Staff Writer