Today, at Google’s annual I/O developer conference in Mountain View, California, Jeff Dean, a senior fellow in Google’s Research Group and the head of Google’s AI division, gave an overview of the company’s efforts to solve challenging scientific problems with AI and machine learning. The presentation capped off a narrative that began Tuesday with the launch of Google’s $25 million global AI impact grants program, and Google’s reveal of three ongoing accessibility projects enabled by AI technologies.
Dean framed the talk around a list of 21st-century grand challenges for engineering published by the U.S. National Academy of Engineering in 2008. Among them were pie-in-the-sky pursuits like reverse-engineering the brain, managing the nitrogen cycle, and providing energy from fusion, alongside more realistic goals, such as advancing health informatics, making solar energy more affordable, and enhancing virtual reality.
“If we make progress in all of them, the world would be a healthier place. We’d have more scientific discoveries,” said Dean.
He detailed the ongoing work of researchers at Waymo, Google parent company Alphabet's autonomous driving division. In the 10 years since it emerged from Alphabet's X skunkworks, Waymo's cars have racked up over 10 million real-world miles and ferried more than 1,000 paying customers from place to place in Phoenix, Arizona, site of the Waymo One ride-hailing service. That fleet has safety drivers, but Waymo has operated cars elsewhere without them.
“[We’re] on the cusp of dramatically [changing] how we train autonomous vehicles. [Cars] have to make a complex set of decisions, like what you want to do to accomplish goals,” said Dean. “It’s really thanks to deep learning algorithms that we can build an understanding of the world and have them operate in a real-world environment.”
Machine learning has endless applications in robotics, said Dean, particularly in scenarios that task robots with manipulating objects in a range of sizes and unusual shapes.
One task in particular — grasping objects that a robot has never encountered before — has seen rapid improvements. Google AI systems in 2015 and 2016 achieved a 65% grasp success rate and 78% success rate, respectively, which researchers managed to improve to 96% in 2018.
Separately, Google’s AI team has been leveraging self-supervised imitation learning, a training technique in which a model derives its own supervisory signal from raw observations rather than relying on large volumes of hand-labeled examples, to “teach” robots new skills. Dean described a model that learned to pour soda from a can by “watching” human demonstrations. After 15 trials and just 15 minutes of training, it attained the pouring skills of an average eight-year-old child.
Another acute area of interest for Google is health, where AI has been instrumental in developing diagnostic tools for diseases like metastatic breast cancer. Diabetic retinopathy is another illness Google is targeting, and with good reason — it’s the fastest-growing cause of blindness among the 415 million people globally with diabetes. Alarmingly, an estimated 45% of patients suffer vision loss before diagnosis.
Diabetic retinopathy is commonly identified from retinal fundus images, which ophthalmologists grade on a sliding scale. The greater the number of hemorrhages in an image, the further the disease’s progression.
Google adopted an AI system to read these images and demonstrated in a paper published in 2016 in the Journal of the American Medical Association that the system could grade images on par with general ophthalmologists. In a subsequent study one year later, Google proposed a machine learning model that could match the performance of board-certified retinal specialist ophthalmologists.
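The setup Dean described amounts to ordinary multiclass classification: a model maps a fundus image to one of several severity grades. A minimal, purely illustrative sketch (the grade names follow the commonly used five-point clinical scale; the random linear "model" is a hypothetical stand-in for the deep network a real system would use):

```python
import numpy as np

# Hypothetical sketch: severity grading as multiclass classification.
# A production system would run a deep convolutional network over the raw
# fundus image; a random linear model over toy features stands in here.
GRADES = ["none", "mild", "moderate", "severe", "proliferative"]

rng = np.random.default_rng(42)
W = rng.normal(size=(16, len(GRADES)))   # stand-in "model" weights

def grade(features: np.ndarray) -> str:
    """Map a feature vector to the most probable severity grade."""
    logits = features @ W
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return GRADES[int(np.argmax(probs))]

image_features = rng.normal(size=16)     # stand-in for extracted image features
print(grade(image_features))
```

Grading then reduces to picking the highest-probability class, which is also why model quality can be compared head-to-head against human graders scoring the same images.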
In February, Google worked with Aravind Eye Hospital in Madurai, India, to deploy a model in production.
“That’s kind of the gold standard of care,” Dean said. “[W]ith good, high-quality training data, you can train a model and get the effects of retinal ophthalmologists.”
In a more recent study, Google AI scientists trained an AI system to look for other, less obvious relationships among retinal scan samples. Incredibly, it predicted factors like self-reported sex, systolic and diastolic blood pressure, hemoglobin, and age with high accuracy — 97% in the case of sex, and within three years of subjects’ ages.
“The same accuracy [as] a much more invasive blood test, now you can do that with retinal images. There’s a real hope this could be a new kind of thing — [when] you go to the doctor, they’ll take a picture of your eye, and we’ll have a longitudinal history of your eye and be able to learn new things from that.”
In another domain — chemistry — Google is leapfrogging conventional computation with highly efficient AI models, said Dean. One detailed in 2017 is roughly 300,000 times faster at quantum chemistry calculations, which traditionally require a far pricier — and slower — simulator.
“All of the sudden, that means you can do very different kinds of science. You can say, ‘Oh, well, I’m going to lunch, I should probably screen 100 million molecules,'” said Dean. “That might be interesting, [and] I think it’ll play out in a lot of scientific fields.”
He explained that the breakthroughs are enabled by the modern reincarnation of the neural network: a collection of trainable mathematical units organized in layers that work together to solve complicated tasks. They learn features from raw, heterogeneous, and noisy data that previously would have required extensive manual preprocessing.
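Dean's description — trainable mathematical units organized in layers, learning features directly from data — can be made concrete with a toy sketch (plain NumPy, illustrative only, not Google's code): a two-layer network trained by gradient descent on XOR, a function no single linear unit can represent.

```python
import numpy as np

# Toy illustration: trainable units organized in layers, trained end to end.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])   # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # layer 1 parameters
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # layer 2 parameters

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)           # layer 1: linear map + nonlinearity
    return h, sigmoid(h @ W2 + b2)     # layer 2: linear map + nonlinearity

def loss(p):                           # binary cross-entropy
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

loss_start = loss(forward(X)[1])
lr = 0.5
for _ in range(5000):
    h, p = forward(X)
    g = (p - y) / len(X)               # gradient w.r.t. output pre-activation
    gh = (g @ W2.T) * (1 - h ** 2)     # backpropagate through tanh layer
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
loss_end = loss(forward(X)[1])
print(np.round(forward(X)[1]).ravel())
```

The "features" here are whatever the hidden layer learns on its own — the same property that, scaled up, lets networks work from raw, noisy inputs without hand-built preprocessing.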
Work continues on scalable architectures, like Transformers, which have an aptitude for generating human-like text, and highly compact AI systems that can run on-device, like Google’s recently released Gboard transcription model. Close to 90 machine learning papers now appear every day on the preprint server Arxiv.org, Dean said, growth that he coyly pointed out exceeds Moore’s Law.
“It’s pretty clear that machine learning is going to be a big part of science and engineering,” he said. “[Our goal is to encourage the] exchange [of] machine learning model ideas and [to put] them into practice … I think it’s a great responsibility to push forward the state of the art and apply it to different things.”