In the past few years, artificial intelligence has come into its own, and lots of companies are grafting it onto their core businesses, marrying AI with search, ecommerce, social networking, cybersecurity — you name it. But what if those businesses had started out in an age of AI and had integrated it into their products from the very beginning?
Peter Relan addressed this speculative question for us at our MobileBeat 2017 conference this week. Relan is a well-known entrepreneur who started the YouWeb incubator, which spawned startups such as mobile gaming companies OpenFeint and CrowdStar. Now he’s CEO of Got It and an investor in the popular gaming chat app Discord.
Relan’s Got It is a new kind of search engine, and it uses AI to locate human experts who can answer your questions in a personalized way. He thinks this will yield better results, and it’s an example of the kind of business that is better because it was born in the AI boom.
Here’s an edited transcript of our interview.
VentureBeat: What if Google, Amazon, and Facebook had started with AI algorithms a long time ago, rather than only getting hip to the subject recently? Peter, why don’t you talk about that for us?
Peter Relan: The last speaker mentioned briefly that AI had gone through a popular phase in the ‘80s. I was in college at the time. The AI hype that started in the ‘80s continued a bit into the ‘90s, but by the 2000s we were much more focused on web 2.0, social commerce, and so on. You had this slew of companies, major tech giants, starting in that era without AI at their core.
VB: They were born in the AI bust.
Relan: Exactly, if we can imagine such a thing. I found it very curious, looking back over the last 35 years. Today, if you’re a startup like the ones in my portfolio, involving AI wouldn’t even be a question in your strategy. You would start with AI as a key part of what you do. So I chose a few companies to look at very closely and consider what they would be like if they’d started with AI as something core to what they do.
I can start by talking about Facebook. A lot of people here know this, but Facebook is essentially a blank page: its content is generated by its community of users. If Facebook had started with AI, the number one problem it would have solved is how content surfaces. We all remember this from the gaming craze in 2007 and 2008, when Facebook opened up its API. The number one complaint users had was spam. We all got FarmVille requests filling up our news feeds.
VB: Too many sheep flying back and forth.
Relan: You had all this irrelevant content, and the core strategy of Facebook in the early days was simply to let the community sort it out, until it reached a breaking point where there was so much spam from FarmVille on Facebook. I remember meeting Mark Zuckerberg in 2010, and at that point he literally said, “I hate this.” It had completely destroyed the network. It’s interesting, because games are actually one of the most important applications on any new platform — consider the iPhone — but there’s obviously a risk of letting that get out of control.
Fast-forward to today: what’s the bad content now? The bad content 10 years ago was game spam. Today it’s fake news, on 10 times the scale that existed 10 years ago. FarmVille had 200 million users. Facebook has 2 billion users now. So how do you stop fake news — which we all agree is bad content — from maybe even throwing an election? We have a huge problem there.
You would think that Facebook’s natural instinct would be to stop fake news with — well, what was I just saying? Let the community handle it. But in 2016 they acknowledged that they had built AI machinery into the system. They’re using that, in combination with the community, to identify fake news. That’s a huge admission and a huge shift for Facebook, which so deeply believes that the user community will take care of bad content.
VB: Do you feel like they’re turning a battleship here, to focus on AI and try to clean out garbage content from the network?
Relan: Facebook is more like an aircraft carrier when it tries to turn. We saw that when they went into mobile. I think it’s more that it’s inevitable. Even if you’re 100 percent in control of your company, you can’t avoid the fact that here is this technology, and you would literally have to say, “I do not want to use it.” The community isn’t going away.
We have a similar problem at one of our companies, Discord. It’s a voice chat community for gamers. It’s, what, 50 million users? That’s one-fortieth the size of Facebook. It’s become a hub for people who create small communities to chat about the games they’re playing. But we’re finding that groups are springing up that aren’t about games. They’re about all kinds of other topics. So we use image recognition to, for example, enforce our policies about porn on those channels. Image recognition is great for knocking that out. Even on a community platform, then, you would use AI if you were a startup today to stop bad content.
VB: Do you still eventually have to get to human curation, like in our earlier talk about eBay?
Relan: I think of it as exception handling. You ultimately have to allow the AI engines to do their job, and then you have a sort of exception flow where the AI says, “I don’t know exactly what to make of this.” Then you have a small operation, like we do at Discord, that deals with exceptions and edge cases. But imagine the scale issues if you went the other way.
You also need tools for your users. Anybody in a group can flag a chat and say, “I found this unacceptable,” or “I found this objectionable.” That will also do it. Whether the community speaks first or whether the AI speaks first, you have to have them working together.
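The AI-first, human-exception flow Relan describes, with community flags as a second signal, can be sketched roughly like this. This is a hypothetical illustration in Python; the function name, thresholds, and routing labels are all assumptions, not Discord’s actual moderation pipeline:

```python
def route_content(ai_confidence_bad: float, user_flags: int,
                  block_threshold: float = 0.9,
                  review_threshold: float = 0.5,
                  flag_limit: int = 3) -> str:
    """Route one piece of content: the AI decides the clear cases,
    humans handle the exceptions, and community flags feed in too.

    All thresholds here are invented for illustration.
    """
    if ai_confidence_bad >= block_threshold or user_flags >= flag_limit:
        return "block"         # AI (or the community) is sure: act automatically
    if ai_confidence_bad >= review_threshold or user_flags > 0:
        return "human_review"  # the "I don't know what to make of this" flow
    return "allow"
```

Whether the community speaks first (flags) or the AI speaks first (confidence score), both signals land in the same routing decision, which is the point Relan makes about having them work together.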
VB: So there’s a human in the loop, still. That takes us to your new startup, Got It. Tell us more about Got It and the human aspect of it.
Relan: Got It is a new company that’s asking, “If Amazon can give you a new virtual machine server as a service, why not offer knowledge as a service?” If you want to know something, you can go through Google and search for it. You can browse forums and communities. But neither of those is really a service, because a service has to meet four key criteria.
One, it has to have a defined unit. With Google, you don’t know how many links you’re getting, whether it’s four or 45. Two, it must have a set price. Three, it must be on demand. And four, it must be guaranteed. If you look at Google or Quora, neither meets all four criteria. With communities and forums and Q&A sites, you don’t know if somebody will ever answer. There’s no guarantee, and it’s certainly not on demand.
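Relan’s four criteria make a simple checklist. A minimal sketch, with hypothetical names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Offering:
    """Score an offering against Relan's four service criteria."""
    defined_unit: bool  # e.g. a 10-minute chat session
    set_price: bool     # price known up front
    on_demand: bool     # available the moment you ask
    guaranteed: bool    # a response is assured

def is_true_service(o: Offering) -> bool:
    """An offering is a true service only if all four criteria hold."""
    return o.defined_unit and o.set_price and o.on_demand and o.guaranteed

# Google search: no defined unit of answer, no guarantee your question is resolved.
google = Offering(defined_unit=False, set_price=True, on_demand=True, guaranteed=False)
# A Got It session: 10-minute unit, set price, on demand, expert guaranteed.
got_it = Offering(defined_unit=True, set_price=True, on_demand=True, guaranteed=True)
```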
Got It is creating a version of that just like Amazon, which sets up a machine for you on demand, and the machine is well-understood. The pricing is well-understood. This is the spec, the CPU, and this is the price you pay for it. We have this notion of a 10-minute chat session, on demand, with an expert for your question. You have a question, you ask it, and the expert shows up.
VB: And this is a human expert.
Relan: Exactly. You have your 10-minute chat session, you go back and forth, and when you’re done, you evaluate the session. It could be something pretty technical: I’m working on a pivot table in Excel and it’s not pivoting right. Let’s work on this together for 10 minutes. We do believe that humans need to be in the loop, but the interesting thing is that the way you find the expert is using AI. I don’t think you can substitute for the sharing experience we get when two human beings connect and work on something together, when they explain something to each other. That’s irreplaceable, frankly. I don’t think any sort of content interaction replaces person-to-person contact.
The cool thing is that finding the expert, which is not actually human-interaction-dependent, can be done using AI. We use the same kind of algorithm Google uses: PageRank. Google has a new system now called RankBrain, which is the first time they’ve acknowledged using AI, in addition to content, as a way of finding the best pages for you. We use what we call ExpertRank, an AI that asks, “For this problem, of all the millions or billions of people out there, who is the best person, the expert?” As long as experts are registered, they get a notification that tells them, “Somebody would like help with this problem.”
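An expert-matching step along the lines Relan describes could be sketched as below. The scoring rule here, topic overlap weighted by the expert’s running rank, is an assumption made for illustration; the real ExpertRank algorithm is not public:

```python
def match_expert(question_tags: set, experts: list) -> dict:
    """Pick the best registered expert for a question.

    Hypothetical rule: score = topic overlap weighted by the
    expert's running rank, and the top scorer gets notified.
    """
    def score(expert):
        overlap = len(question_tags & set(expert["tags"]))
        return overlap * expert["rank"]
    return max(experts, key=score)

experts = [
    {"name": "A", "tags": ["excel", "pivot-tables"], "rank": 4.2},
    {"name": "B", "tags": ["excel"], "rank": 4.9},
    {"name": "C", "tags": ["python"], "rank": 5.0},
]
# The chosen expert would then get the notification:
# "Somebody would like help with this problem."
best = match_expert({"excel", "pivot-tables"}, experts)
```

Note the design trade-off the sketch exposes: a broadly high-ranked expert ("C") loses to a lower-ranked but better-matched one ("A"), which is exactly the "right person for this problem" behavior Relan is after.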
We all know that for any question we have, among the seven billion people in the world, there is someone who is perfectly matched for our question. We know that, intuitively. The notion of combining humans with AI, whether it’s at Facebook or — at Google it’s actually very interesting, because the search engine runs completely on servers, and the AI engine they’ve added to the search system is also completely running on servers. It’s truly a server-based system. But only 15 percent of Google’s queries are answered with the help of AI. The other 85 percent are still using the traditional PageRank.
When you don’t have humans in the loop, I think AI-assisted engines are extremely welcome. AI-only engines will get there in time, like the true self-driving car. But if you look at Tesla and Google’s Waymo, the two strategies are different. Tesla’s strategy is AI-assisted: it won’t drive without you there. The human has to be in the loop. I would argue that Tesla is collecting more data than Google and Uber are today, because it went ahead with AI assistance, as opposed to pure self-driving.
VB: So there’s more efficiency in the answering if you have this combination.
Relan: You have more data. People have said here before that AI is all about data: the more data, the better your AI. With bad content, in a strange way, the more of it your community generates, the better your users get at picking out what’s bad. The more fake news there is, the easier it is to fight, because you can train your AI to recognize it. If fake news were like finding a needle in a haystack, it would actually be harder to train the AI.
It’s the same with Google searches. The more searches, the better Hummingbird gets at understanding the query. Hummingbird is Google’s query-understanding algorithm, and RankBrain is the AI component inside it. The more queries you can train it on, and the more results you can understand, the better.
VB: The question I think of here is how you scale the human part on your end. I may be the top expert on a given subject, but I’m not going to answer a query at three in the morning.
Relan: The vision of Got It is very simple. Today we have a very large social network with 2 billion people. I would argue that most of the communication that goes on in that social network is, well, social. It doesn’t emerge from the perspective of, “Hey, I have a problem and I need some knowledge to solve it.”
So imagine a world where we have 7 billion people. Just take a guess. How many cars are out there in the world? One billion? How many homes are there in the world? One billion? Then we can have on-demand services connecting those supplies of homes and cars to the demand for homes and cars. The companies doing that are doing very well, Airbnb and Uber. So how many human brains are out there in the world? How many companies or systems in the world need to connect to the right brain in a knowledge network to solve a specific problem?
There’s somebody out there who very specifically can give you the knowledge you need, but there isn’t yet a system for that. So the idea is: build a knowledge network as big as Facebook, but not a social one. The interesting thing is, if you own a home, there’s a mortgage cost. If you own a car, there’s a lease cost. But the knowledge you carry in your brain, the things you know about, cost you nothing to carry, as far as I know. We have the world’s most underutilized resource, and it’s free. All that’s missing is a knowledge network the size of Facebook to connect it.
The vision, then, is to get everybody to be an expert at something or other on the system, and build an AI engine that finds the right person for a problem. We’ve delivered more than 3 million sessions now. We have 12,500 ranked experts in the network, and 200 more join every day. More than a quarter million have applied. We have people like software engineers taking questions about Excel. They’re having lunch, this thing pops up, and it looks interesting.
The marginal cost of this inventory of brains is zero. All we need is the AI, because the humans exist. We have no shortage of humans. We don’t want to replace them. We want to find them.
VB: Where are you with this now? What’s your road map for where you need to go?
Relan: Today, as I say, we’ve delivered about 3 million sessions. Now we have data. One of the most interesting things is, we have 3 million chat sessions in our database between a client and an expert over some problem or another. Now we get to the point of, say, “How can we mine that data for our machine learning algorithms? How do we look at that data from the point of view of the expert?”
Our AI algorithm looks at every single session and adjusts the expert’s rank based on six factors. The first factor is politeness. We have processing that asks, “Did the expert talk to the user politely?” It’s a utility; the user pays for it. Second is empathy. Does the user say something like, “Yes, I feel like you understand my problem”? Those are signals, in the chat session, that the user is feeling empathy from the expert. Third, of course, is accuracy. Did they answer the question? Did the Excel pivot table end up working? And fourth is personal information: did they try to exchange personal information?
If you look at a 10-minute chat session and the richness of the human conversational content, it’s very large. You have politeness, empathy, accuracy, customer service at the end. Hey, are we done? Are you satisfied? All of those go into it, and as a result the expert’s rank will adjust. Right away, at the end of the session, they’ll be told, “Hey, that was a great session. Here’s your new rank.”
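A per-session rank adjustment along these lines might look like the sketch below. The factor names, the weights, and the 0-to-1 signal scores are all invented for illustration; Got It’s actual six factors and weighting are not public:

```python
# Hypothetical weights; "shared_personal_info" is penalized per the policy above.
WEIGHTS = {
    "politeness": 0.2,            # did the expert talk to the user politely?
    "empathy": 0.2,               # "I feel like you understand my problem"
    "accuracy": 0.4,              # did the pivot table end up working?
    "customer_service": 0.2,      # "Are we done? Are you satisfied?"
    "shared_personal_info": -0.5, # exchanging personal info lowers the rank
}

def update_rank(rank: float, signals: dict, learning_rate: float = 0.1) -> float:
    """Nudge an expert's rank after one session.

    Each signal is a 0..1 score produced by analyzing the chat
    transcript; the weighted sum moves the running rank up or down.
    """
    adjustment = sum(WEIGHTS[name] * score for name, score in signals.items())
    return rank + learning_rate * adjustment
```

This matches the flow Relan describes: the session ends, the signals are scored, and the expert immediately sees an updated rank.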
Our road map is very clear. We will never replace human beings, but we will always be looking to AI for content, to weed out bad content, to provide good service, to promote empathy and understanding. When you start to describe these attributes in a chat session, it sounds pretty human, doesn’t it? We’re asking our AI engine to provide no bad content, the same as Facebook. We want it to find a relevant expert, the right person, same as Google — like relevant search results. And then this last thing is the human quality of the session. Which is delivered by a human, but we look at it and say, “Was it delivered in a way that made it a great one-on-one chat session?”
VB: This is a service that can still get better over time, then.
Relan: It will only get better over time, because the data keeps improving. We’ll find more bad content. We’ll get more relevant experts for people’s problems. We’ll obviously have better results within the session, as long as we keep adding the data to train our AI.
Amazon, by the way, is very interesting. This is my challenge to Amazon, because it keeps building a black box. We’re all developers here, or many of us. We look for a service on demand. Say we need more compute power. We now know there’s an enormous variety of compute power you could get out there. So how do you know whether you’re getting the best resources for your problem? If it’s a large data analytics problem it’s probably a different set of resources than if you’re a transaction processing app or a machine learning app.
What is the relevance of an on-demand service? I think that’s an important aspect of Amazon’s future, even though most people are obsessing about Alexa and Echo. As a platform, I would want to make sure that when a request comes in on demand, I find the best resource for it. The sheer number of AI applications right now is exploding. How do you find the best resource for your particular application? Maybe Amazon does that and maybe it doesn’t, but we don’t really know, and we don’t have the ability to manage that response.