Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
While the growth of deep neural networks has helped propel the field of machine learning to new heights, there’s still a long road ahead when it comes to creating artificial intelligence. That’s the message from a panel of leading machine learning and AI experts who spoke at the Association for Computing Machinery’s Turing Award Celebration conference in San Francisco today.
We’re still a long way from human-level AI, according to Michael I. Jordan, a professor of computer science at the University of California, Berkeley. He said that applications using neural nets are essentially faking true intelligence, but that even this imitation is good enough to enable interesting development.
“Some of these domains where we’re faking intelligence with neural nets, we’re faking it well enough that you can build a company around it,” Jordan said. “So that’s interesting, but somehow not intellectually satisfying.”
Those comments come at a time of increased hype for deep learning and artificial intelligence in general, driven by interest from major technology companies like Google, Facebook, Microsoft, and Amazon.
Fei-Fei Li, the chief scientist for Google Cloud, said that she sees this as “the end of the beginning” for AI, but that plenty of hurdles remain. She identified several key areas where current systems fall short: contextual reasoning, contextual awareness of their environment, and integrated understanding and learning.
“This kind of euphoria of AI has taken over, and [the idea that] we’ve solved most of the problem is not true,” she said.
One pressing issue identified by Raquel Urtasun, who leads Uber’s self-driving car efforts in Canada, is that the algorithms used today don’t model uncertainty very well, which can prove problematic.
“So they will tell you that there is a car there, for example, with 99 percent probability, and they will tell you the same thing whether they are wrong or not,” she said. “And most of the time they are right, but when they are wrong, this is a real issue for things like self-driving [cars].”
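The overconfidence Urtasun describes can be sketched in a few lines. The example below is purely illustrative (the logits and class labels are invented): a classifier's raw softmax output can report near-certainty regardless of whether the prediction is correct, and temperature scaling, one standard post-hoc calibration remedy, softens those probabilities without changing which class is predicted.

```python
import math

def softmax(logits):
    # Convert raw scores (logits) into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_softmax(logits, temperature):
    # Temperature scaling: dividing logits by T > 1 softens
    # overconfident probabilities; the argmax is unchanged.
    return softmax([x / temperature for x in logits])

# Hypothetical detector scores for {car, pedestrian, cyclist}.
logits = [8.0, 2.0, 1.0]

raw = softmax(logits)
cal = scaled_softmax(logits, temperature=3.0)

print(f"raw confidence:        {max(raw):.3f}")   # close to 99 percent
print(f"calibrated confidence: {max(cal):.3f}")   # noticeably lower
```

The raw model reports roughly 99 percent confidence whether or not the detection is actually a car; a calibrated model would report a probability that better tracks its true error rate, which is what safety-critical systems like self-driving cars need.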
The panelists did agree, however, that artificial intelligence matching human-level ability is possible.
“I think we have at least half a dozen major breakthroughs to go before we get close to human-level AI,” said Stuart Russell, a professor of computer science and engineering at the University of California, Berkeley. “But there are very many very brilliant people working on it, and I am pretty sure that those breakthroughs are going to happen.”
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.