The 2010s were huge for artificial intelligence, thanks to advances in deep learning, a branch of AI that has become feasible because of the growing capacity to collect, store, and process large amounts of data. Today, deep learning is not just a topic of scientific research but also a key component of many everyday applications.
But a decade’s worth of research and application has made it clear that in its current state, deep learning is not the final answer to the ever-elusive challenge of creating human-level AI.
What do we need to push AI to the next level? More data and larger neural networks? New deep learning algorithms? Approaches other than deep learning?
This is a topic that has been hotly debated in the AI community and was the focus of an online discussion Montreal.AI held last week. Titled “AI debate 2: Moving AI forward: An interdisciplinary approach,” the debate was attended by scientists from a range of backgrounds and disciplines.
Hybrid artificial intelligence
Cognitive scientist Gary Marcus, who cohosted the debate, reiterated some of the key shortcomings of deep learning, including excessive data requirements, low capacity for transferring knowledge to other domains, opacity, and a lack of reasoning and knowledge representation.
Marcus, who is an outspoken critic of deep learning–only approaches, published a paper in early 2020 in which he suggested a hybrid approach that combines learning algorithms with rules-based software.
Other speakers also pointed to hybrid artificial intelligence as a possible solution to the challenges deep learning faces.
“One of the key questions is to identify the building blocks of AI and how to make AI more trustworthy, explainable, and interpretable,” computer scientist Luis Lamb said.
Lamb, who is a coauthor of the book Neural-symbolic Cognitive Reasoning, proposed a foundational approach for neural-symbolic AI that is based on both logical formalization and machine learning.
“We use logic and knowledge representation to represent the reasoning process [that] is integrated with machine learning systems so that we can also effectively reform neural learning using deep learning machinery,” Lamb said.
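The division of labor Lamb describes can be sketched in miniature: a learned perception stage emits symbolic facts, and a hand-written rule base reasons over them. Everything below — the features, symbols, and rules — is invented for illustration and is not Lamb's actual system.

```python
# Toy neural-symbolic pipeline (illustrative only, not Lamb's system).
# Stage 1 stands in for a trained network; stage 2 is symbolic reasoning.

def neural_perception(image_features):
    """Stand-in for a learned model: maps raw scores to symbolic facts.
    In a real system this would be a trained neural network."""
    return {
        "has_wheels": image_features["wheel_score"] > 0.5,
        "has_wings": image_features["wing_score"] > 0.5,
    }

# Human-supplied knowledge: (premises, conclusion) rules.
RULES = [
    ({"has_wheels", "has_wings"}, "is_airplane"),
    ({"has_wheels"}, "is_vehicle"),
]

def symbolic_reasoning(facts):
    """Forward-chain over RULES until no new conclusions can be derived."""
    derived = {fact for fact, holds in facts.items() if holds}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

features = {"wheel_score": 0.9, "wing_score": 0.8}
conclusions = symbolic_reasoning(neural_perception(features))
print(conclusions)  # derives is_airplane and is_vehicle from the perceived facts
```

The point of the split is that the rules are inspectable and editable by humans, which speaks directly to the trustworthiness and interpretability concerns Lamb raises.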
Inspiration from evolution
Fei-Fei Li, a computer science professor at Stanford University and the former chief AI scientist at Google Cloud, underlined that in the history of evolution, vision has been one of the key catalysts for the emergence of intelligence in living beings. Likewise, work on image classification and computer vision helped trigger the deep learning revolution of the past decade. Li is the creator of ImageNet, a dataset of millions of labeled images used to train and evaluate computer vision systems.
“As scientists, we ask ourselves, what is the next north star?” Li said. “There are more than one. I have been extremely inspired by evolution and development.”
Li pointed out that intelligence in humans and animals emerges from active perception and interaction with the world, a property that is sorely lacking in current AI systems, which rely on data curated and labeled by humans.
“There is a fundamentally critical loop between perception and actuation that drives learning, understanding, planning, and reasoning. And this loop can be better realized when our AI agent can be embodied, can dial between explorative and exploitative actions, is multi-modal, multi-task, generalizable, and oftentimes social,” she said.
At her Stanford lab, Li is currently working on building interactive agents that use perception and actuation to understand the world.
OpenAI researcher Ken Stanley also discussed lessons learned from evolution. “There are properties of evolution in nature that are just so profoundly powerful and are not explained algorithmically yet because we cannot create phenomena like what has been created in nature,” Stanley said. “Those are properties we should continue to chase and understand, and those are properties not only in evolution but also in ourselves.”
Computer scientist Richard Sutton pointed out that, for the most part, work on AI lacks a “computational theory,” a term coined by neuroscientist David Marr, who is renowned for his work on vision. Computational theory defines what goal an information processing system seeks and why it seeks that goal.
“In neuroscience, we are missing a high-level understanding of the goal and the purposes of the overall mind. It is also true in artificial intelligence — perhaps more surprisingly in AI. There’s very little computational theory in Marr’s sense in AI,” Sutton said. Sutton added that textbooks often define AI simply as “getting machines to do what people do” and most current conversations in AI, including the debate between neural networks and symbolic systems, are “about how you achieve something, as if we understood already what it is we are trying to do.”
“Reinforcement learning is the first computational theory of intelligence,” Sutton said, referring to the branch of AI in which agents learn through trial-and-error interaction with an environment to maximize their reward. “Reinforcement learning is explicit about the goal, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To this end, the agent has to compute a policy, a value function, and a generative model,” Sutton said.
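Sutton's "whats and whys" can be made concrete with the smallest possible example: tabular Q-learning, where the reward signal is explicit and the agent estimates a value function from which its policy is derived. The environment (a five-state chain with reward at one end) and the hyperparameters below are invented for illustration.

```python
# Minimal tabular Q-learning sketch (illustrative; toy environment).
import random

random.seed(0)

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Value function: expected return of taking action a in state s.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: reward 1 only for reaching the rightmost state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy policy: mostly exploit the value function,
        # occasionally explore a random action.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # Update the value estimate toward the observed reward signal.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The policy is derived from the learned value function:
# the greedy action in every non-terminal state should be "move right".
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The pieces Sutton names are all visible here: the reward signal is the "what," the Q-table is the value function, and the greedy rule over it is the policy. (A generative model of the environment, the third component he mentions, is omitted in this model-free sketch.)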
He added that the field needs to further develop an agreed-upon computational theory of intelligence and said that reinforcement learning is currently the standout candidate, though he acknowledged that other candidates might be worth exploring.
Sutton is a pioneer of reinforcement learning and the author of a seminal textbook on the topic. DeepMind, the AI lab where he works, is deeply invested in “deep reinforcement learning,” a variation of the technique that integrates neural networks into basic reinforcement learning techniques. In recent years, DeepMind has used deep reinforcement learning to master games such as Go, chess, and StarCraft II.
While reinforcement learning bears striking similarities to the learning mechanisms in human and animal brains, it also suffers from the same challenges that plague deep learning. Reinforcement learning models require extensive training to learn even the simplest tasks and are rigidly constrained to the narrow domains they are trained on. For the time being, developing deep reinforcement learning models requires very expensive compute resources, which limits research in the area to deep-pocketed companies such as Google, which owns DeepMind, and Microsoft, the quasi-owner of OpenAI.
Integrating world knowledge and common sense into AI
Computer scientist and Turing Award winner Judea Pearl, best known for his work on Bayesian networks and causal inference, stressed that AI systems need world knowledge and common sense to make the most efficient use of the data they are fed.
“I believe we should build systems which have a combination of knowledge of the world together with data,” Pearl said, adding that AI systems based only on amassing and blindly processing large volumes of data are doomed to fail.
Knowledge does not emerge from data, Pearl said. Instead, we employ the innate structures in our brains to interact with the world, and we use data to interrogate and learn from the world, as witnessed in newborns, who learn many things without being explicitly instructed.
“That kind of structure must be implemented externally to the data. Even if we succeed by some miracle to learn that structure from data, we still need to have it in the form that is communicable with human beings,” Pearl said.
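Pearl's separation of structure from data can be illustrated with a toy example in the spirit of his Bayesian networks. The causal structure — here, rain causes wet grass — is supplied as prior knowledge, while the probabilities are estimated from observations. The scenario and data below are invented for illustration; they are not from the debate.

```python
# Toy knowledge-plus-data sketch (invented example, in the spirit of
# Pearl's Bayesian networks, not his actual formalism).

# Data: hypothetical observations of (rain, wet_grass).
data = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

# Knowledge: the assumed structure rain -> wet_grass, i.e. the
# factorization P(rain, wet) = P(rain) * P(wet | rain).
n = len(data)
n_rain = sum(rain for rain, _ in data)
p_rain = n_rain / n
p_wet_given_rain = sum(wet for rain, wet in data if rain) / n_rain
p_wet_given_dry = sum(wet for rain, wet in data if not rain) / (n - n_rain)

# Under the assumed structure, the interventional query
# P(wet | do(rain=True)) reduces to P(wet | rain=True),
# because rain has no modeled causes.
print(round(p_wet_given_rain, 3))  # → 0.667
```

The data alone fixes the numbers, but only the hand-supplied structure licenses reading the conditional as a causal effect — which is Pearl's point that such structure must be brought to the data rather than extracted from it.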
University of Washington professor Yejin Choi also underlined the importance of common sense and the challenges its absence presents to current AI systems, which are focused on mapping input data to outcomes.
“We know how to solve a dataset without solving the underlying task with deep learning today,” Choi said. “That’s due to the significant difference between AI and human intelligence, especially knowledge of the world. And common sense is one of the fundamental missing pieces.”
Choi also pointed out that the space of reasoning is infinite, and reasoning itself is a generative task and very different from the categorization tasks today’s deep learning algorithms and evaluation benchmarks are suited for. “We never enumerate very much. We just reason on the fly, and this is going to be one of the key fundamental, intellectual challenges that we can think about going forward,” Choi said.
But how do we reach common sense and reasoning in AI? Choi suggests a wide range of parallel research areas, including combining symbolic and neural representations, integrating knowledge into reasoning, and constructing benchmarks that test more than categorization.
We still don’t know the full path to common sense yet, Choi said, adding, “But one thing for sure is that we cannot just get there by making the tallest building in the world taller. Therefore, GPT-4, -5, or -6 may not cut it.”