University of Toronto faculty member, Google Brain researcher, and recent Turing Award recipient Geoffrey Hinton spoke this afternoon during a fireside chat at Google’s I/O developer conference in Mountain View. He discussed the origin of neural networks — layers of mathematical functions modeled after biological neurons — and the feasibility and implications of AI that might someday reason like a human.
“It seems to me that there is no other way the brain could work,” said Hinton of neural networks. “[Humans] are neural nets — anything we can do they can do … better than [they have] any right to.”
Hinton, who has spent the past 30 years tackling some of AI’s biggest challenges, is often referred to as the “Godfather of AI.” In addition to his seminal work in machine learning, he has authored or coauthored more than 200 peer-reviewed papers, including a 1986 paper on backpropagation, the technique now used to train most deep neural networks.
Hinton popularized the idea of deep neural networks: AI models in which those mathematical functions are arranged in interconnected layers that pass “signals” to one another and adjust the strength (weights) of their connections. In this way, they extract features from input data and learn to make predictions.
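As a rough illustration (a toy example, not anything Hinton presented), a deep network is just layers of weighted sums passed through nonlinearities, with each layer building features on top of the one below:

```python
import numpy as np

def layer(x, weights, bias):
    """One layer of a neural network: a weighted sum of inputs
    passed through a nonlinearity (here, ReLU)."""
    return np.maximum(0, weights @ x + bias)

# A tiny two-layer network: 4 inputs -> 3 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
w2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

x = rng.normal(size=4)          # input features
hidden = layer(x, w1, b1)       # first layer extracts features
output = layer(hidden, w2, b2)  # second layer builds on them
```

Training such a network means adjusting the weights so the outputs move toward a target, which is exactly where backpropagation comes in.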
Deep neural networks were refined into the Transformer, an architecture Google researchers detailed in a blog post and accompanying paper (“Attention Is All You Need”) two years ago. Thanks to attention mechanisms, which dynamically weigh how much each part of the input influences each output, Transformers can outperform state-of-the-art models on language translation tasks while requiring less computation to train.
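The core of that attention mechanism fits in a few lines. Here is a simplified, single-head sketch of scaled dot-product attention, omitting the learned projections and masking of a full Transformer:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each query attends to every key; the resulting weights,
    computed dynamically from the data, mix the values."""
    scores = q @ k.T / np.sqrt(k.shape[-1])           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax -> attention weights
    return weights @ v

# 5 tokens, each with an 8-dimensional representation.
rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(5, 8))   # self-attention: tokens attend to each other
out = scaled_dot_product_attention(q, k, v)
```

Unlike a fixed connection weight, each attention weight is recomputed for every input, which is what “calculated dynamically” means in practice.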
Hinton admitted that the pace of innovation has surprised even him. “[I wouldn’t have expected] in 2012 that in the next five years we’d be able to translate between many languages using […] the same technology,” he said.
That said, Hinton believes that current AI and machine learning approaches have their limitations. He pointed out that most computer vision models don’t have feedback mechanisms — that is, they don’t try to reconstruct data from higher-level representations. Instead, they try to learn features discriminatively by changing the weights.
“They’re not doing things like at each level of feature detectors checking that they can reconstruct the data below,” said Hinton.
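One way to read that remark: most vision models minimize only a discriminative loss, while a reconstruction-based model also checks that its higher-level features can regenerate the input. A minimal sketch of that idea, using a single linear autoencoder layer rather than Hinton’s own approach:

```python
import numpy as np

rng = np.random.default_rng(0)
encoder = rng.normal(size=(16, 64)) * 0.1   # pixels -> features
decoder = rng.normal(size=(64, 16)) * 0.1   # features -> reconstructed pixels

x = rng.normal(size=64)                     # input "image"
features = np.tanh(encoder @ x)             # higher-level representation
reconstruction = decoder @ features         # try to regenerate the input

# A reconstruction loss checks that the features retain what is in the data;
# a purely discriminative model never computes this quantity.
loss = np.mean((x - reconstruction) ** 2)
```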
He and colleagues recently turned to the human visual cortex for inspiration. Human vision takes a reconstruction approach to learning, said Hinton, and as it turns out, reconstruction techniques in computer vision systems increase their resistance to adversarial attacks.
“Brain scientists all agreed on the idea that, if you have two areas of the cortex in a perceptual pathway and connections from one to the other, there will always be a backward pathway,” said Hinton.
To be clear, Hinton thinks that neuroscientists have much to learn from AI researchers. In fact, he believes that AI systems of the future will mostly be of the unsupervised variety. Unsupervised learning, a branch of machine learning that gleans structure from unlabeled and uncategorized data, is almost humanlike in its ability to learn commonalities and react to their presence or absence, he says.
“If you take a system with billions of parameters, and you do stochastic gradient descent in some objective function, it works much better than you’d expect … The bigger you scale things, the better it works,” he said. “That makes it far more plausible that the brain is computing the gradient of some objective function and updating the strength of synapses to follow that gradient. We just have to figure out how it gets the gradient and what the objective function is.”
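The recipe Hinton describes is remarkably compact: a parameterized model, an objective function, and repeated small steps down the objective’s gradient. A minimal sketch on a toy least-squares problem (the model here is a stand-in, not the brain-scale systems he is describing):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))               # inputs
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=1000)  # noisy targets

w = np.zeros(10)                              # parameters ("synapses")
lr = 0.1                                      # learning rate
for step in range(500):
    i = rng.integers(0, 1000, size=32)        # random mini-batch: the "stochastic" part
    error = X[i] @ w - y[i]
    grad = X[i].T @ error / len(i)            # gradient of mean squared error
    w -= lr * grad                            # follow the gradient downhill
```

Hinton’s point is that this loop scales surprisingly well, and that the brain may be running something analogous, if only we knew its objective function and how it obtains the gradient.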
This might even unlock the great mystery of dreams. “Why is it that we don’t remember our dreams at all?” Hinton asked the crowd rhetorically.
He thinks it might have something to do with “unlearning,” a theory he put forward in a coauthored paper on Boltzmann machines. These AI systems, networks of symmetrically connected, neuron-like units that make stochastic decisions about whether to be “on” or “off,” tend to “find … observed data less surprising.”
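The “stochastic decisions” those units make can be sketched simply: each unit turns on with a probability given by a logistic function of its weighted input from the others. A bare-bones illustration (biases and the learning rule itself are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n)) * 0.5
W = (W + W.T) / 2                      # symmetric connections
np.fill_diagonal(W, 0)                 # no self-connections

state = rng.integers(0, 2, size=n)     # units start randomly on/off
for _ in range(100):                   # update one unit at a time (Gibbs sampling)
    i = rng.integers(n)
    p_on = 1 / (1 + np.exp(-W[i] @ state))  # logistic of the unit's total input
    state[i] = rng.random() < p_on
```

Letting the network run “free” this way, and then nudging the weights to make those self-generated states less likely, is the unlearning phase Hinton suggests dreaming might resemble.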
“The whole point of dreaming [might be] so you put the whole learning process in reverse,” said Hinton.
Hinton believes these insights could transform entire fields, like education. For instance, he anticipates far more personalized, targeted courses that take human biochemistry into account.
“You’d have thought that if we really understand what’s going on we should be able to make things like education better, and I think we will,” he said. “It would be very odd if you can finally understand what’s going on [in the brain] and how it learns and not adapt the environment so you can learn better.”
He cautions that this will take time. In the nearer term, Hinton envisions a future of intelligent assistants — like Google Assistant or Amazon’s Alexa — that interact with users and guide them in their daily lives.
Hinton’s predictions come after a recent speech by Eric Schmidt, former executive chair of Google and Alphabet. Schmidt similarly believes that in the future, personalized AI assistants will use knowledge of our behaviors to keep us informed.
“In a couple of years, I’m not sure we’ll learn much. But if you look at it, assistants are pretty smart now, and once assistants can really understand conversations, assistants can have conversations with kids and educate them,” Hinton concluded.