In May, during its Google I/O 2021 developer conference, Google demoed the Multitask Unified Model (MUM), a system trained across 75 languages at once that can simultaneously understand different forms of information, including text, images, and videos. Today, Google revealed that it’s using MUM to identify variations in the names of COVID-19 vaccines across multiple languages, which the company claims has improved Google Search’s ability to surface information about the vaccines for users around the world.
As Google notes, the COVID-19 vaccines released to date — including those from AstraZeneca, Moderna, and Pfizer — go by different names depending on the country and region of origin. Vaccine names number in the hundreds globally, and not all of them have historically risen to the top of Search when users typed phrases like “new virus vaccines,” “mrna vaccines,” and “AZD1222.”
MUM, which can transfer knowledge between languages and doesn’t need to be explicitly taught how to complete certain tasks, helped Google engineers identify more than 800 COVID-19 vaccine name variations in over 50 languages, according to Google Search VP Pandu Nayak. Given only a few examples of “official” vaccine names, MUM was able to find variations across languages “in seconds,” compared with the weeks it might take a human team.
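MUM’s architecture and training details are not public, so there is no way to reproduce what Google describes here. Purely as a toy illustration of the general task — grouping surface variants of a name (different hyphenation, casing, or added words) around a few “official” seed names — the sketch below compares character n-gram profiles with cosine similarity. The seed and candidate strings are invented for the example; this is emphatically not Google’s method, which involves a large multilingual model rather than string matching.

```python
# Toy illustration only: groups name variants around seed names by comparing
# character n-gram profiles. NOT how MUM works -- MUM is a large multilingual
# model; this is a simple string-similarity stand-in for the same task shape.
from collections import Counter
import math
import unicodedata


def ngrams(name: str, n: int = 3) -> Counter:
    """Character n-gram counts over a normalized, lowercased name."""
    s = unicodedata.normalize("NFKD", name).lower()
    s = "".join(ch for ch in s if ch.isalnum())  # drop hyphens, spaces, etc.
    padded = f"##{s}##"  # pad so word boundaries contribute n-grams too
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))


def similarity(a: str, b: str) -> float:
    """Cosine similarity between the two names' n-gram profiles."""
    ca, cb = ngrams(a), ngrams(b)
    dot = sum(ca[g] * cb[g] for g in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


# A few "official" seed names and some candidate query strings (all invented).
seeds = ["AZD1222", "Comirnaty", "mRNA-1273"]
candidates = ["azd-1222", "comirnaty vaccine", "weather in tokyo"]

for cand in candidates:
    best = max(seeds, key=lambda s: similarity(s, cand))
    print(f"{cand!r} -> closest seed {best!r} ({similarity(best, cand):.2f})")
```

A query variant like “azd-1222” maps cleanly back to the seed “AZD1222,” while an unrelated query scores near zero against every seed; a real system would use learned multilingual representations so that variants in different scripts also land near the right seed.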
“This first application of MUM has helped to provide users around the world with important information in a timely manner,” Nayak said in a blog post translated from Japanese. “We look forward to making search more convenient through the use of MUM in the future. Early testing has shown that MUM not only improves existing systems, but also helps develop new methods of information retrieval.”
Google previously applied AI to the problem of providing projections of COVID-19 cases, deaths, ICU utilization, ventilator availability, and other metrics useful to policymakers and health care workers. In August 2020, in partnership with Harvard, the company released models that forecast COVID-19–related developments over the next 14 days for U.S. counties and states.
MUM has potential beyond vaccine name identification, particularly in situations where it can draw on context from imagery and across multiple dialogue turns. For example, given a photo of hiking boots and asked “Can I use these to hike Mount Fuji?”, MUM can comprehend both the content of the image and the intent behind the query, letting the questioner know that hiking boots would be appropriate and pointing them toward a relevant post on a Mount Fuji blog.
MUM can also understand questions like “I want to hike Mount Fuji next fall — what should I do to prepare?” Because of its multimodal capabilities, MUM realizes that “prepare” could encompass things like fitness training as well as weather. The model could then recommend that the questioner bring a waterproof jacket, and offer pointers to go deeper on those topics with relevant articles, videos, and images from across the web.
“We’re in the early days of exploiting this new technology,” Prabhakar Raghavan, senior VP at Google, said onstage at Google I/O. “We’re excited about its potential to solve more complex questions, no matter how you ask … MUM is changing the game with its language understanding capabilities.”