This summer, Google unveiled artificial intelligence software that learned to recognize cats, human faces, and other objects by training on YouTube videos. The technology is now being used to improve Google's products, such as speech recognition for Google Voice.
Google's neural network is based on simulating groups of connected brain cells that communicate with each other, processing data in a way loosely modeled on how the brain works and learns. As it absorbs data, the network gets better at processing it and at recognizing relationships within it. That's what we call learning.
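To make that idea concrete, here is a minimal sketch (not Google's system, which is vastly larger): a single artificial "neuron" that absorbs labeled examples of the logical OR relationship and, by repeatedly nudging its connection weights, gets better at predicting the right answer. All names and parameters here are illustrative assumptions.

```python
import math

# A minimal sketch of neural-network-style learning (illustrative only):
# one simulated neuron learns the logical OR function from examples.

def sigmoid(x):
    # Squash any input into the range (0, 1), like a firing rate.
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=5000, lr=0.5):
    w1 = w2 = b = 0.0  # start with no knowledge
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            err = out - target
            # Gradient of squared error w.r.t. the pre-activation sum.
            grad = err * out * (1.0 - out)
            # Nudge each connection weight to reduce the error.
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

# Labeled examples of logical OR: ((input1, input2), expected output).
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train(samples)
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b))
               for (x1, x2), _ in samples]
print(predictions)  # after training, matches the OR targets: [0, 1, 1, 1]
```

Real networks chain millions of such units together, but the principle is the same: the more data flows through, the more the weights come to encode relationships in that data.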
Neural networks have been used for decades in software for face detection and chess playing. But Google has far more computing power than anyone else, thanks to the data centers it runs to process search requests. Google is now using neural networks to recognize speech more accurately. That's increasingly important for Android, the mobile operating system that competes with Apple's iOS. Vincent Vanhoucke, leader of Google's speech recognition efforts, told Technology Review that speech recognition results have improved by 20 to 25 percent. Other Google products could benefit, too.
Google researchers say they’re not building a biological brain yet. But maybe one of these days….