VentureBeat: Oh really? So you’re not of the opinion that reinforcement learning — the sort of techniques that have led to gains seen by OpenAI, DeepMind, and others — has a long tail?
Socher: The problem with almost all of the reinforcement learning approaches is that they require a simulation where they can try millions and millions of times to do a relatively simple task — like playing games. Once you have a perfect simulation of everything you need to know about a world to make good decisions in it, then yeah, you can learn that way. But the real world is not that easy to simulate.
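To make that concrete, here is a minimal sketch (an editorial illustration, not code from the interview) of tabular Q-learning on an invented ten-cell corridor. The environment, reward, and hyperparameters are all assumptions chosen for illustration; the point is that the agent only succeeds because resetting and replaying the simulated world costs nothing, while each of these thousands of trials would be a real-world experiment otherwise.

```python
# Toy tabular Q-learning on a simulated "corridor": reach cell 9 from cell 0.
import random

N_STATES = 10            # corridor cells 0..9; reward only at the far end
ACTIONS = (-1, +1)       # step left or right
EPISODES = 5_000         # cheap in simulation, prohibitive in reality

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(EPISODES):
    state = 0
    for _step in range(200):                      # cap episode length
        if state == N_STATES - 1:                 # reached the goal
            break
        if random.random() < eps:                 # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:                                     # greedy, random tie-break
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # one-step Q-learning update
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After enough simulated trials the greedy policy walks straight to the goal.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```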
So if, for instance, you wanted to use those kinds of algorithms in medicine, you might first have to let a couple billion people die before you find a useful way to make a change.
Don’t get me wrong — a lot of reinforcement learning techniques are very exciting, and it’s interesting that it’s possible at all. I don’t think there’s a reason why we will never get to AGI. But I think we still need to figure out important things like multitask learning, because we still don’t have a single model yet that can answer lots of different kinds of questions. And if we’re far away from that, we don’t need to worry about an AGI.
Basically, we need to make sure that we don’t overhype the field — both on the positive side and the negative side. And we need to work to make sure that the public isn’t scared of basic research, because we’re nowhere close to systems that set their own goals.
VentureBeat: I’m just curious to know — because I also find this subject really interesting — which approaches you think are the most promising. Are there any AI training techniques or architectures that might lay the foundation for AGI?
Socher: I certainly hope that our research contributes to what eventually is going to become this enormous structure — a little Lego piece to a very large building.
I do think it’s clear that AGI needs to have multitask learning, and that it needs to have some interaction-level type things, and maybe even reinforcement learning-type algorithms. Clearly, it needs to learn good representations and intermediate representations of the world. And so clearly, some kind of deep learning aspect will be crucial.
It’s also clear that we need to think of new and different objective functions. We need to ask ourselves: How could we train a system that does something general but also acquires specific skills? So, for example, a lot of people think we just need to do future prediction — like next frame prediction on video, words, language, and so on. In some ways, if you could predict the next words in a sentence perfectly, you would have a perfect understanding of the world.
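To make the objective he is describing concrete, here is a toy sketch (an editorial illustration, not Socher's code): a counting-based bigram model scored with the same next-word cross-entropy that large language models minimize at vastly larger scale. The corpus and numbers are invented for illustration.

```python
# A bigram "predict the next word" model trained by counting co-occurrences.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each context word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(prev):
    counts = follows[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Cross-entropy on a held-out sentence: lower means better next-word
# prediction, which is the quantity being optimized in this objective.
test = "the cat sat on the rug .".split()
nll = 0.0
for prev, nxt in zip(test, test[1:]):
    p = next_word_probs(prev).get(nxt, 1e-6)  # tiny floor for unseen pairs
    nll += -math.log(p)
print("average next-word loss:", nll / (len(test) - 1))
```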
But people, as they grow, do more than just try to predict the future.
It’s also true that humans have certain needs and wants that AI doesn’t have — they want social connections, they want food, and so on. AI doesn’t have to grow up or evolve in a resource-constrained environment — you can just connect it to some solar panels and it’ll stay there forever.
VentureBeat: So if you had to predict whether we’ll make progress toward AGI in the next five years, what would you say? Are you optimistic about the research community’s chances?
Socher: Maybe. Of course, the AGI hype might be over by then, after people realize that we can do very useful, concrete things without it. We already have robots that wash dishes — they’re called dishwashers. And they’re perfect robots for the job, because they do what they’re asked to do. I think a much more realistic vision for the short to intermediate term is this: We have very specific tools that automate more and more complex tasks. AI has the potential to improve every industry out there, like agriculture, medicine — simple things, complex things, you name it. But you don’t need AGI to make an impact.
My main concern is that the AGI fears actually distract us from the real issues of bias, messing with elections, and the like. Look at recommender engines — when you click a conspiracy video on a platform like YouTube, it optimizes for clicks and advertiser views and shows you crazier and crazier conspiracy theories to keep you on the platform. And then you basically have people who become very radicalized, because anybody can put up this crazy stuff on YouTube, right? And so that’s a real issue. Those are the things we should be talking about a lot more, because they can mess up society and make the world less stable and less democratic.
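A deliberately oversimplified sketch of the feedback loop he describes (an editorial illustration, not any platform's actual system; the titles and scores are invented): rank purely by predicted engagement, feed every click back into the score, and the most attention-grabbing item keeps rising regardless of whether it is true or healthy to watch.

```python
# Engagement-only ranking with a click feedback loop.
videos = [
    {"title": "calm explainer",     "predicted_watch_minutes": 3.0},
    {"title": "mild clickbait",     "predicted_watch_minutes": 6.5},
    {"title": "outrage conspiracy", "predicted_watch_minutes": 11.0},
]

def recommend(candidates, k=2):
    # The only objective is engagement; nothing here penalizes misleading content.
    return sorted(candidates, key=lambda v: v["predicted_watch_minutes"], reverse=True)[:k]

def register_click(video):
    # Each click nudges the engagement estimate upward, skewing the next
    # ranking further toward whatever already grabs attention.
    video["predicted_watch_minutes"] *= 1.1

feed = recommend(videos)
register_click(feed[0])           # the user watches the top result
print([v["title"] for v in recommend(videos)])
```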