Google’s existing machine-learning infrastructure, known as DistBelief, has hitherto been used internally at Google for tasks like identifying and automatically labeling items in YouTube videos and photos, as well as improving speech recognition in Google apps. But DistBelief had its limits: it was “narrowly targeted to neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure — making it nearly impossible to share research code externally,” according to a blog post by Jeff Dean, senior Google fellow, and Rajat Monga, technical lead.
TensorFlow is Google’s second-generation machine-learning system, one the company claims is twice as fast and more flexible than its predecessor, capable of running on anything from a single smartphone to entire data centers.
“It allows us to build and train neural nets up to five times faster than our first-generation system, so we can use it to improve our products much more quickly,” explained Google CEO Sundar Pichai, in a separate blog post.
Google has been increasingly turning to artificial intelligence (AI) and deep-learning technologies to improve its myriad offerings. For example, back in October Google revealed it would use AI to improve YouTube’s video thumbnails, automatically selecting the best thumbnail when users upload videos. It also uses AI to help fight Gmail spam, while Google Research is applying deep-learning techniques to aid drug discovery.
Deep learning involves training systems called “artificial neural networks” on large amounts of data drawn from various inputs, then applying what they learn to new information — many startups are currently working on deep-learning techniques. Just last month, smart keyboard company SwiftKey revealed a new app that taps neural networks to improve its word prediction.
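To make the idea concrete, here is a minimal sketch of what “training a neural network on data” means in practice. This is a generic illustration in plain NumPy — not TensorFlow or DistBelief code — in which a single-layer network learns the logical OR function through repeated exposure to four example inputs:

```python
import numpy as np

# Toy training data: the four input pairs for logical OR and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # randomly initialized weights
b = 0.0                  # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for _ in range(1000):
    pred = sigmoid(X @ w + b)               # forward pass: current predictions
    grad = pred - y                         # error signal (cross-entropy gradient)
    w -= learning_rate * (X.T @ grad) / len(y)   # nudge weights to reduce error
    b -= learning_rate * grad.mean()             # nudge bias the same way

print(np.round(sigmoid(X @ w + b)))         # predictions match the OR targets
```

Real deep-learning systems like TensorFlow apply this same loop — predict, measure error, adjust weights — across many layers and millions of parameters, which is why speed and scalability across data centers matter so much.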
By open sourcing TensorFlow, Google is able to tap the broader developer and scientific community to vastly improve the source code, which in turn will help Google improve its own products.
“Machine learning is still in its infancy — computers today still can’t do what a 4-year-old can do effortlessly, like knowing the name of a dinosaur after seeing only a couple of examples, or understanding that ‘I saw the Grand Canyon flying to Chicago’ doesn’t mean the canyon is hurtling over the city,” said Pichai. “We have a lot of work ahead of us. But with TensorFlow we’ve got a good start, and we can all be in it together.”