Richard Socher gets around. He’s the founder of MetaMind, an artificial intelligence (AI) startup that raised more than $8 million in venture capital backing from Khosla Ventures and others before being acquired by Salesforce in 2016, and he previously served as an adjunct professor in Stanford’s computer science department, where he also received his Ph.D. (He earned his bachelor’s degree at Leipzig University and his master’s at Saarland University.) In 2007, Socher was part of the team that won first place in the Semantic Robot Vision Challenge. And he was instrumental in assembling ImageNet, a publicly available database of annotated images used to test, train, and validate computer vision models.
Socher — who’s now Salesforce’s chief scientist — has long been attracted to the field of natural language processing, a subfield of computer science concerned with interactions between computers and human languages. His dissertation demonstrated that deep learning — layered mathematical functions loosely modeled on neurons in the human brain — could solve several different natural language processing tasks simultaneously, obviating the need to develop multiple models. At MetaMind in 2014, using some of the same theoretical principles, he and a team of engineers produced a model that achieved state-of-the-art accuracy on ImageNet.
It’s no wonder that in 2017, the World Economic Forum called him “one of the prodigies of the artificial intelligence and deep learning space whose breakthrough technologies are transforming natural language processing and computer vision.”

At Salesforce, Socher manages a team of researchers that actively publishes papers on question answering, computer vision, image captioning, and other core AI areas, and once a year co-teaches Stanford’s graduate-level Natural Language Processing with Deep Learning course. At the NeurIPS 2018 conference in Montreal last week, he graciously volunteered his time to speak with VentureBeat about AI systems as they exist today, Salesforce’s role in the research community, and the progress (or lack thereof) that’s been made toward artificial general intelligence — i.e., humanlike AI.
Here’s an edited transcript of our interview.
VentureBeat: It’s been a busy year for Salesforce. Einstein, last I saw, is powering something like 4 billion predictions, up from a billion earlier this year, and it’s slowly becoming a part of almost every product in your portfolio. And it wasn’t that long ago you announced the Einstein Voice platform and made Einstein bots for business generally available. So maybe we can start there.
Conversational systems are increasingly becoming a part of consumers’ lives. Clearly, Salesforce sees them as a really important part of your business. So what does the future look like?
Richard Socher: I think it’s important for us to be a Switzerland, if you will, with regard to a lot of these AI efforts, because our customers are in a lot of different places. At Salesforce, we think not only about our customers’ needs, but about their customers’ needs in a B2C capacity. That’s why we try to support all of these different frameworks and platforms — like Alexa and Google Home, for example.
At the same time, there are a lot of enterprise-specific requirements that we want to fulfill, so it also makes sense for us to build our own [solutions] in areas where we have a lot of strength. For instance, service is something that we know very well in the enterprise world. We’re trying to empower all our customers — 150,000-plus companies — to benefit from AI the same way these very large companies with multi-billion dollar R&D budgets benefit from it. That’s why I’m excited about this platform mindset that we have. We’re really trying to democratize these technologies.
It turns out that large companies want a service, and they want to pay for it to have service-level agreements, uptime guarantees, support, and all of that. Just having some open-sourced code lying around somewhere isn’t really that useful. To actually democratize AI for a lot of companies of various sizes, you have to make it available as a service. Of course, we first start with kind of the packaged apps we think are the most useful directly, so our customers don’t have to fiddle around with anything. But we also want to make it easy enough for admins to create their own AI features.
VentureBeat: You just mentioned some of the challenges involved in open-sourcing your technologies. Is compliance one of those?
Socher: It’s interesting you mention that. We have bank customers who saw the first version of our Einstein Voice system, which uses a consumer API, and several of them said they couldn’t use it because of [the API].
One of the models in the larger speech system that we use is a language model that tries to predict the next word in a sentence, to help autocomplete things. Now, if you take data from a customer and they say, “Company X is acquiring Company Y,” that becomes a part of the training data. The trouble is, it’s very sensitive information, and if it’s inadvertently displayed somehow to a user through autocorrect, that’s obviously a bad thing.
What I’m saying is, I can see why banks and other enterprises like insurance with very private data don’t want to necessarily use consumer APIs, where their data becomes part of larger pools. Each company has its own lingo and private data, and it’s important for them to feel like they have control over the kind of vocabulary they want to use in their speech recognition systems.
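As a rough illustration of that leakage risk, here is a minimal, hypothetical sketch in Python (a toy bigram counter, not Salesforce’s actual speech or language model) showing how a phrase from training data can resurface as an autocomplete suggestion:

```python
# Hypothetical sketch: a toy bigram "language model" that suggests the next
# word, illustrating how sensitive training text can leak back out as a
# suggestion. This is for illustration only, not Salesforce's system.
from collections import defaultdict, Counter

def train(sentences):
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest_next(counts, word):
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

# Training data that happens to contain a sensitive sentence.
model = train([
    "company x is acquiring company y",
    "company x reported earnings today",
])

print(suggest_next(model, "acquiring"))  # -> "company", echoing the private phrase
```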
VentureBeat: Not to harp too much on the compliance thing, and I don’t mean to suggest there’s an easy solution, but have you been paying attention to developments on the encryption front, like Intel’s HE-Transformer? I’m talking about AI systems that train on encrypted data. Do you think that might be an area worth investigating further?
Socher: I actually love that. It’s kind of interesting — there are two competing thoughts here. On the one hand, AI, you might say, is already hard enough. Why should we make it even harder by encrypting data? After all, the brain doesn’t first encrypt data and then try to access it.
But you can also argue that we need privacy right now. Trust is our number one value at Salesforce. We want to make these systems better, and maybe you make them slightly worse by encrypting the data. But then, you could do more data sharing, and maybe, as a result, produce a system that does better in the end.
VentureBeat: You hinted at transparency and privacy just now, both of which I’m sure are important areas for Salesforce. Your customers, of course, don’t want their data compromised, and they want to understand what’s happening under the hood — like how AI is arriving at its conclusions. So what’s the work you’re doing there — what does it look like today?
Socher: You touched upon a couple different things — all of which are very important to us.
Interpretability and opening up the “black box” is going to become a more important research area, but it’s on a spectrum — nobody asks why an object detection algorithm for consumer packaged goods (CPG) classified, for example, a can of Spam as a can of beans. They don’t ask why it classified something as this versus that, because the behavior doesn’t change — they just want to automate a certain process, and they expect it to be in a ballpark range and have some error bars on it. And so interpretability, for people in some industries, is less important.
Now, if you were to make a vision classification system in medicine, you’d want to know why it’s telling you that you need to get major surgery. Even within the field of computer vision specifically, there’s a spectrum — you don’t necessarily care about some things as much as other things in terms of the interpretability of the algorithms.
VentureBeat: In that health care example you just brought up, you’d also probably want to know a bit about the datasets that were used to train the AI model, right?
Socher: Yes. The datasets come in on the fairness question. Of course, you want to know if an AI system has been trained on people from a certain ethnicity, or certain age groups, and so on. There’s a lot of complexity in the training data when considering bias and fairness. But I think it’s also an interesting angle for interpretability — like finding training examples that most closely resemble a test case and are most likely what led the algorithm to make a certain kind of decision.
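As a rough sketch of that interpretability idea, the snippet below (hypothetical Python, with made-up feature vectors rather than anything from Salesforce’s stack) surfaces the training examples most similar to a given test case:

```python
# Hypothetical sketch: explain a prediction by retrieving the training
# examples closest to the test case (cosine similarity). Data is made up.
import numpy as np

def most_similar_training_examples(train_X, test_x, k=3):
    # Normalize, then take the dot product to get cosine similarities.
    train_norm = train_X / np.linalg.norm(train_X, axis=1, keepdims=True)
    test_norm = test_x / np.linalg.norm(test_x)
    similarities = train_norm @ test_norm
    return np.argsort(similarities)[::-1][:k]  # indices of the k closest training examples

train_X = np.random.rand(100, 16)   # 100 training examples, 16 features each
test_x = np.random.rand(16)         # the case whose prediction we want to explain
print(most_similar_training_examples(train_X, test_x))
```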
On the other end of the spectrum, we have machine learning classifiers where we ask humans to change their behavior. For instance, Salesforce offers lead opportunity scoring for salespeople. A salesperson on any given day might call, like, 5,000 people, and we try to answer the question: Who should they really call? Our tools provide a ranked list. But when we tell salespeople this, a lot of them feel that we’re telling them how to do their job. So you have to give them reasons why one call is the right one or is ranked higher than others.
In some cases, then, interpretability is extremely crucial, because the AI feature will not see adoption if it doesn’t have that aspect to it. In other cases, there is an unfortunate discrepancy between the most interpretable algorithms and the most accurate ones.
It’s going to be an interesting ethical question. We make trade-offs as a society, but I think that at some point we have to say, OK, these autonomous systems aren’t perfect. Self-driving cars could save like 10,000 lives a year, but you might have five or a dozen different algorithms that are responsible for having killed like 5,000 people a year because they weren’t 100 percent accurate. Neither are humans, though — humans are even worse from an accuracy standpoint. It’s a really interesting sociological, philosophical, ethical kind of question for us to ask now.
VentureBeat: It kind of gets at the question of algorithmic fairness, right?
Socher: That’s true. On the fairness side, Salesforce thinks about that a lot — again, we are a platform company, and we release tools for other companies to build their own AI.
One of the features that we announced at Dreamforce last year was Einstein Prediction Builder, where, based on any set of columns, you can predict another column. That sounds kind of boring, but it’s close to 80 percent of enterprise and business machine learning applications. These are predictions like: Will this person pay their loan back? Should they get a mortgage? We had to be very careful, because we don’t know what kinds of columns are going in there as input. Someone could build a racist or sexist kind of classifier.
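The underlying task, predicting one column of a table from the others, looks roughly like the sketch below, which uses hypothetical column names and off-the-shelf scikit-learn rather than Einstein Prediction Builder itself:

```python
# Hypothetical sketch: predict one column ("repaid") from the other columns
# of a small table. Column names and data are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "income":      [52, 38, 91, 27, 64, 45, 73, 30],
    "loan_amount": [10, 12,  8, 15,  9, 11,  7, 14],
    "repaid":      [ 1,  0,  1,  0,  1,  1,  1,  0],   # target column
})

X, y = df[["income", "loan_amount"]], df["repaid"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(model.predict(X_test))   # predictions for the held-out rows
```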
I’m very excited that this feature comes with a required Salesforce Trailhead on ethical AI — to at least create awareness for admins who are building these kinds of systems, so they think about what potential issues there could be in their datasets and what kinds of biases might be in the training data. You don’t want to have a loan or credit classifier that takes into account gender and, like, doesn’t give a woman startup money for her company because it hasn’t seen as many women starting companies in the training dataset.
There’s no silver bullet in this space. There are some interesting and complex algorithmic ideas if you know which classes or columns of people you want to protect — you can try to make sure that the bias in the data doesn’t get further amplified in the algorithm. But it’s an ongoing conversation that has to happen. Kathy Baxter, the architect of Ethical AI Practice at Salesforce, has this really great saying: Ethics is a mindset, not a checklist. Now, it doesn’t mean you can’t have any checklists, but it means you do need to always think about the broader implications as the technology touches human life and informs important decisions.
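One simple check along these lines, assuming the protected column is known, is to compare positive prediction rates across groups. The snippet below computes a demographic parity ratio on made-up data; it is just one of many possible fairness metrics, not Salesforce’s implementation:

```python
# Hypothetical sketch: demographic parity ratio for a known protected column.
# A value of 1.0 means equal positive-prediction rates across groups.
import pandas as pd

def demographic_parity_ratio(predictions, protected):
    rates = pd.Series(predictions).groupby(pd.Series(protected)).mean()
    return rates.min() / rates.max()

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                    # model decisions (made up)
gender = ["f", "f", "m", "m", "f", "m", "f", "m"]    # protected attribute (made up)
print(demographic_parity_ratio(preds, gender))       # well below 1.0 here, flagging a gap
```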
VentureBeat: Where does regulation come into play? Microsoft’s Brad Smith recently called on Congress and tech leaders to prevent misuses of facial recognition. What’s Salesforce’s position?
Socher: On the subject of regulation, we have to acknowledge that it doesn’t make sense to regulate all of AI — it makes sense to regulate AI applied to certain human endeavors, like self-driving cars, drug discovery, radiology, and pathology. It’s a pretty complex space, and it’s hard to figure out where to draw the line — to figure out where we are imposing our viewpoints culturally, politically, and technologically.
There are a lot of things that can go wrong with respect to facial recognition, for sure. There’s a lot of really bad research, sometimes even from good universities — like thinking you can classify whether somebody is gay or not from a photo of their face. As soon as you start making important decisions based on facial recognition, you can do some terrible things — like using it and other AI software to make judgments in the judicial system. It’s obviously going to be biased.
So hopefully, yes, there will be regulations against that application of the technology.
There’s a silver lining, of course, which is that it’s easier to change one algorithm to make certain decisions than it is to change, for example, 10,000 store managers who don’t promote women as often at a supermarket chain or something like that. But it will require research and analysis, along with interpretability, fairness, the right datasets, and all of that together, to make sure that positive part of the future can actually happen.
VentureBeat: Do you think part of the problem is that we have unrealistic expectations of these AI systems? Is it that the public isn’t aware of their limitations?
Socher: The marketing team and I work together on making sure we’re not talking about the brain and, you know, human AGI and all of that, which is exciting, but it’s still science fiction.
This is a little tangential, but I don’t think there’s a credible research path towards AGI — like, we don’t even have the missing pieces.
VentureBeat: Oh really? So you’re not of the opinion that reinforcement learning — the sort of techniques that have led to gains seen by OpenAI, DeepMind, and others — has a long tail?
Socher: The problem with almost all of the reinforcement learning approaches is that they require simulation where they can try millions and millions of times to do a relatively simple task — like playing games. Once you have a perfect simulation of everything you need to know about a world to make good decisions in it, then yeah, you can simulate it. But the real world is not that easy to simulate.
So if, for instance, you wanted to use those kinds of algorithms on medicine, you first maybe have to let a couple billion people die before you find a useful way to make a change.
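To illustrate why simulation matters, the sketch below runs tabular Q-learning on a made-up five-state toy world; even this trivial task takes thousands of simulated episodes, which is exactly the kind of trial-and-error you cannot afford in a domain like medicine:

```python
# Hypothetical sketch: tabular Q-learning on a toy "walk right to the goal"
# world. The environment and hyperparameters are invented for illustration.
import random

GOAL = 4                                    # states 0..4; start at 0, goal at 4
q = [[0.0, 0.0] for _ in range(GOAL + 1)]   # Q-values per state: action 0 = left, 1 = right

for episode in range(2000):                 # thousands of cheap simulated trials
    state = 0
    while state != GOAL:
        # Explore on ties or with small probability, otherwise act greedily.
        if random.random() < 0.1 or q[state][0] == q[state][1]:
            action = random.randint(0, 1)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, min(GOAL, state - 1 if action == 0 else state + 1))
        reward = 1.0 if next_state == GOAL else 0.0
        q[state][action] += 0.1 * (reward + 0.9 * max(q[next_state]) - q[state][action])
        state = next_state

# Learned policy for states 0..3; typically all "right" after training.
print([("left", "right")[row.index(max(row))] for row in q[:GOAL]])
```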
Don’t get me wrong — a lot of reinforcement learning techniques are very exciting, and it’s interesting that it’s possible at all. I don’t think there’s a reason why we will never get to AGI. But I think we still need to figure out important things like multitask learning, because we still don’t have a single model yet that can answer lots of different kinds of questions. And if we’re far away from that, we don’t need to worry about an AGI.
Basically, we need to make sure that we don’t overhype the field — both on the positive side and the negative side. And we need to work to make sure that the public isn’t scared of basic research, because we’re nowhere close to systems that set their own goals.
VentureBeat: I’m just curious to know — because I also find this subject really interesting — which approaches you think are the most promising. Are there any AI training techniques or architectures that might lay the foundation for AGI?
Socher: I certainly hope that our research contributes to what eventually is going to become this enormous structure — a little Lego piece to a very large building.
I do think it’s clear that AGI needs to have multitask learning, and that it needs to have some interaction-level type things, and maybe even reinforcement learning-type algorithms. Clearly, it needs to learn good representations and intermediate representations of the world. And so clearly, some kind of deep learning aspect will be crucial.
It’s also clear that we need to think of new and different objective functions. We need to ask ourselves: How could we train a system that does something general but also acquires specific skills? So, for example, a lot of people think we just need to do future prediction — like next frame prediction on video, words, language, and so on. In some ways, if you could predict the next words in a sentence perfectly, you would have a perfect understanding of the world.
But people, as they grow, do more than just try to predict the future.
It’s also true that humans have certain needs and wants that AI doesn’t have — they want social connections, they want food, and so on. AI doesn’t have to grow up or evolve in a resource-constrained environment — you can just connect it to some solar panels and it’ll stay there forever.
VentureBeat: So if you had to predict whether we’ll make progress toward AGI in the next five years, what would you say? Are you optimistic about the research community’s chances?
Socher: Maybe. Of course, the AGI hype might be over by then, after people realize that we can do very useful, concrete things without it. We already have robots that wash dishes — they’re called dishwashers. And they’re perfect robots for the job, because they do what they’re asked to do. I think a much more realistic vision for the short to intermediate term is this: We have very specific tools that automate more and more complex tasks. AI has the potential to improve every industry out there, like agriculture, medicine — simple things, complex things, you name it. But you don’t need AGI to make an impact.
My main concern is that the AGI fears actually distract us from the real issues of bias, messing with elections, and the like. Look at recommender engines — when you click a conspiracy video on a platform like YouTube, it optimizes for more clicks and advertiser views and shows you crazier and crazier conspiracy theories to keep you on the platform. And then you basically have people who become very radicalized, because anybody can put up this crazy stuff on YouTube, right? And so that is a real issue. Those are the things we should be talking about a lot more, because they can mess up society and make the world less stable and less democratic.