In the past week, both Facebook and Microsoft used their respective developer conferences to offer a glimpse into some of the stuff they’re cooking up. Yesterday, it was Google’s turn to lift the lid on what it’s building — for developers and consumers alike.
In the buildup to Google’s I/O 2019, the internet giant announced a few things it could’ve kept for the main event — last week, for example, it revealed that it was opening Android Automotive to app developers, while earlier this week the company unveiled an Android Auto redesign. In truth, with the usual swathe of leaks, we already had a good idea of what to expect from I/O 2019.
But with all of the main announcements now out of the way, here’s a quick recap of everything Google revealed at I/O 2019.
Artificial intelligence (AI)
AI plays a central role in most tech conferences today, and I/O 2019 was no different.
Some six months after launching its $25 million AI Impact Challenge, Google revealed the winners from 12 nations who will use a Google grant of up to $2 million each to apply machine learning to fight some of the world’s biggest challenges.
Kinda sorta related to that — insofar as Google is super eager to demonstrate the benefits of AI for the greater good of society — the company unveiled three accessibility projects designed to help people with disabilities: Project Euphonia, to assist people with speech impairments; Live Relay, to help those with hearing challenges; and Project Diva, which aims to help people give Google Assistant commands without using their voice.
Of course, AI is infiltrating just about every nook and cranny of the technology industry — and Google was keen to showcase a bunch of new smarts at I/O 2019.
At last year’s event, Google unveiled a new software development kit (SDK) called ML Kit, which helps developers add AI to their mobile apps via Firebase. At this year’s event, Google gave ML Kit a bunch of new features: translation, object detection and tracking, and AutoML Vision Edge — the latter to let developers create custom-tailored image classification models for Edge TPU, ARM, and Nvidia architectures.
Elsewhere on the AI front, the company revealed that Google Assistant will soon be up to 10 times faster thanks to on-device machine learning, and shared plans to bring the turbocharged Assistant to Google's own Pixel phones later this year.
For voice app creators, Google announced a number of upgrades to its Actions on Google platform. Developers, for example, will now be able to tether an action to “how to” questions using the newly introduced “how-to markup language.” Google Assistant-powered apps will be better equipped to respond to commonly asked questions, such as “How do I tie a tie?” with relevant text, images, and instructional videos.
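The "how-to markup language" Google described builds on schema.org's existing HowTo structured-data type, which publishers embed in their pages so Search and Assistant can parse the steps. As a rough sketch of what such markup looks like (the page content, step text, and image URL below are illustrative examples, not taken from Google's announcement):

```json
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to tie a tie",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Start with the wide end",
      "text": "Drape the tie around your neck with the wide end hanging lower, then cross it over the narrow end.",
      "image": "https://example.com/tie-step-1.jpg"
    },
    {
      "@type": "HowToStep",
      "name": "Loop and tighten",
      "text": "Bring the wide end up through the neck loop, pass it through the front knot, and pull it snug.",
      "image": "https://example.com/tie-step-2.jpg"
    }
  ]
}
```

A JSON-LD block like this sits in the page's HTML; the structured steps are what let Assistant answer "How do I tie a tie?" with ordered text, images, and video rather than a plain web result.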
Lens, Google’s visual search and computer vision tool that’s capable of recognizing all manner of real-world objects — from plants and animals to text and celebrities — will soon be able to surface top meals in a restaurant if you simply point your smartphone camera at a menu. This will highlight informational tidbits such as online ratings and reviews.
Additionally, Google Lens will soon be able to read translated text to you if you point your camera at the printed content, and it will also be able to help you split a bill or calculate a tip after a meal.
If there was any lingering doubt as to how advanced AI is getting, Google Duplex should help settle that once and for all. Duplex, which started rolling out to mobile phones last year, is a verbal chat agent that can make appointments for you over the phone. At I/O 2019, Google announced that it is expanding Duplex to the web, where it will be able to handle things like car rental bookings.
Finally, Google’s cloud unit announced that it’s making pods with 1,000 TPU chips available in public beta. Google has for some time been developing its own tensor processing units (TPUs), custom programmable chips designed to power extreme machine learning tasks, which researchers and developers can use to train AI models.
On the Android side, Google debuted a new feature for Android Q called Live Caption, which provides real-time, continuous speech transcription on your phone. This means songs, podcasts, phone calls, video calls, and recordings can all be captioned instantly.
In the broader Android sphere, Google announced that Android has now passed 2.5 billion monthly active devices, and the company finally updated its Android distribution board. After six months with no updates, it now shows that Android Pie (the most recent version of Android) has passed the 10% adoption mark.
Google may have started out as a software company, but it is now very much embedded in the hardware realm. At I/O 2019, the Mountain View-based company sought to improve sales for its Pixel-branded phone lineup with the addition of two more affordable mid-range devices — the Pixel 3a and Pixel 3a XL.
Elsewhere, Google also introduced a new Google Assistant smart device for the home. Priced at $229, the Nest Hub Max is a 10-inch smart display and video camera, and it will go on sale later this summer.
Augmented reality (AR)
After first demoing the feature last year, Google finally revealed that a new AR navigation feature is arriving in preview for Google Maps on some Pixel phones this week. The "heads-up" mode overlays walking directions on a live view from your phone's camera in real time.
Google’s AR announcements didn’t stop there. ARCore, which is Google’s SDK for AR app development, already offers an Augmented Images API that allows users to point their cameras at static 2D images and bring them to life. The API will now enable apps to track both moving images and multiple images simultaneously. Similarly, a new Environmental HDR mode will use machine learning to estimate real-world lighting conditions and apply them to digital objects.
Google Search also got some love yesterday at I/O 2019. Navigable 3D AR models will be arriving in Google’s omnipresent mobile search engine, so if you search for something specific — such as a Great White Shark — you’ll be able to learn about the subject not only through reading or watching videos, but also in 3D AR.
Elsewhere on Search, Google also announced a news recommendation tool called Full Coverage, in addition to a fresh podcast tool that allows users to search for podcasts and save episodes to listen to on other devices.
Privacy and security
Privacy is never far from the public debate these days, and Google used I/O 2019 to debut a handful of new privacy-focused features. Incognito mode, which you may already be familiar with from Chrome, is soon coming to Google Maps and will arrive on YouTube and Google Search later in the year.
As for Chrome, the web’s most widely used cross-platform browser, Google announced plans to protect users from cross-site cookies and “fingerprinting,” though it didn’t divulge exactly when it would roll out these changes, beyond saying “later this year.” The company also mentioned an open source browser extension for ads, which will highlight the names of all the companies “that we know were involved in the process that resulted in an ad.”
Last month, Google announced a new service that allows any Android phone running Android 7.0 Nougat or higher to double as a Fast Identity Online (FIDO) security key to prevent phishing attacks. That capability had apparently launched only in preview, because at I/O 2019 Google announced that it is now generally available to everyone.
Google debuted a bunch of other tools and services for developers at I/O 2019. These included bringing Firebase performance monitoring to web apps; expanding its Flutter mobile app SDK to the web, desktop, and embedded devices; adding 10 new libraries to Android Jetpack; and introducing a new Kotlin toolkit for UI development.