Spirit AI is using artificial intelligence to combat toxic behavior in game communities. The London company has created its Ally social intelligence tool to decipher online conversations and monitor whether cyberbullying is taking place.

It is the brainchild of researchers at New York University, according to Mitu Khandaker, creative partnerships director at Spirit AI and an assistant arts professor at the NYU Game Center. The company uses AI, natural language understanding, and machine learning to help data science and customer service teams understand the general tenor of an online community. It also helps predict problems before they escalate.

Ally considers context, nuance, and the relationships between users rather than simply searching for and blocking keywords. The software uses natural language understanding and AI to identify the intent of a message, then analyzes the recipient's behavior and reactions to determine its impact. That lets it identify behavior such as cyberbullying.
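
The intent-then-impact split described above maps naturally onto a two-stage check. Here is a minimal, hypothetical Python sketch of that idea; the class and function names are invented for illustration and are not Spirit AI's API.

```python
# Hypothetical sketch of the intent-plus-impact approach described above.
# Names (Message, classify_intent, score_impact) are illustrative, not Spirit AI's API.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    recipient: str
    text: str


def classify_intent(msg: Message) -> str:
    """Stand-in for an NLU model that labels the intent of a message."""
    insults = {"idiot", "loser"}
    return "insult" if any(w in msg.text.lower() for w in insults) else "neutral"


def score_impact(reactions: list) -> float:
    """Stand-in for analyzing how the recipient responded (objection, silence, logging off)."""
    negative = {"go away", "leave me alone", "stop"}
    if not reactions:                      # silence can itself be a signal
        return 0.5
    return 1.0 if any(r.lower() in negative for r in reactions) else 0.0


def flag_for_moderation(msg: Message, reactions: list) -> bool:
    # Only flag when hostile intent and negative impact coincide.
    return classify_intent(msg) == "insult" and score_impact(reactions) >= 0.5
```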

Spirit AI recently introduced a host of new features, some of which were unveiled at this year’s Game Developers Conference, and as part of the new Fair Play Alliance Summit. Product improvements include new Smart Filters, an updated and customizable web-based front end and intuitive node-based interface, and GDPR (the new European privacy law) compliance. Another product, Character Engine, is a natural language AI framework for building autonomous characters with agency, history, and knowledge.

“We are at a moment where these things are happening and AI can help,” Khandaker said.

Khandaker recently spoke about Ally at the Casual Connect Europe event in London. I spoke with her afterward. Here’s an edited transcript of our interview.


Above: Mitu Khandaker at Casual Connect Europe in London.

Image Credit: Dean Takahashi

GamesBeat: Tell us about what your company does with Ally.

Mitu Khandaker: Ally is really a tool for analyzing the social landscape of communities, really understanding them in terms of language and context. All of the things about conversation that I mentioned when you're looking at conversations between players and non-player characters (NPCs) also apply here, right? But this time we're looking at those things between players. Conversation is very nuanced, very contextually specific.

As a company, the idea for both products came about at the same time. Initially we were talking about, “What is possible when you really understand language, when you really understand behavior in context?” Obviously there’s the dream of being able to naturally talk to characters. We’re helping realize that. But also, for me personally–we started Spirit almost three years ago. This was a year into Gamergate, when myself and a lot of my colleagues were targets. I thought that if we really understand nuanced conversation, online harassment is a key area to try to tackle.

You can't just ascertain harassment from keywords. Language changes. Bad language might be fine between friends, while something completely innocuous-sounding from a stranger might not be. Someone might invent a new curse word, right? Maybe your system doesn't know about it, or it's deliberately misspelled, all of these things. People try to circumvent these systems as much as possible if they're trying to be malicious.

The system is good at understanding language in context. That’s done several ways. One of them might be just semantically trying to understand what the word means, from how it’s being used. Another might be—this is a key one, which is trying to understand whether the language is consensual. Going back to the example, let’s say my best friend decides to call me a name. I’m fine with that. She can call me whatever she wants to. That’s a consensual relationship there. I know she doesn’t mean it unkindly. I’m not going to say, “Don’t do that.”

But if that same word gets used by a stranger — some random dude online starts hurling the same insult – I might either go silent, not respond, which shows that maybe I’m not having a two-way conversation with this person, or I might say, “Go away, leave me alone,” expressing some kind of lack of consent verbally. Or I might just log off.
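
Here is a hedged sketch of the consent signal Khandaker describes: the same words are weighted differently depending on whether the relationship is mutual and on how the recipient responds. The helper names are illustrative, not Ally's implementation.

```python
# Hypothetical sketch of consent inference from relationship and reaction.
# All names are invented for illustration, not Spirit AI's implementation.
from typing import Optional


def is_mutual(history: list, a: str, b: str) -> bool:
    """True if both users have voluntarily messaged each other before."""
    return (a, b) in history and (b, a) in history


def lacks_consent(response: Optional[str]) -> bool:
    """Silence, logging off, or an explicit objection all suggest non-consent."""
    if response is None:
        return True
    return any(p in response.lower() for p in ("go away", "leave me alone", "stop"))


def likely_harassment(history, sender, recipient, response) -> bool:
    # Friendly banter between mutuals is left alone; the same words from a
    # stranger, met with silence or objection, get flagged.
    return not is_mutual(history, sender, recipient) and lacks_consent(response)
```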

It’s about trying to make communities safer for people. Understanding what it is that specifically has hurt them. What is it that they’re not consenting to? Basically, by using these tools you can curtail those types of interactions and just keep people feeling safe and coming back. That’s how it makes life easier for a player.

On the community management side, the community moderator side, one of our big aspirations is, how do we use things like AI to reduce emotional labor? This is getting into some of my bigger philosophies around AI. There’s a lot of talk about AI and whether automation will replace jobs and things like that. That’s the big topic of conversation now. Wherever you land on that topic, one thing that we can use AI to do is reduce the emotional work that people have to do.

When you look at really tough jobs, like community moderation—moderators, as you know, have to look at so many terrible comments. Over time that stuff gets to you. There have been reports of people in those situations getting treated for PTSD because of the content they're being exposed to constantly, day in and day out. One thing AI can help us with is reducing that emotional labor, automating away some of the shitty stuff they have to look at.

GamesBeat: People always think of AI doing boring work, like truck driving, but–

Khandaker: Right, but what about AI doing the emotional work? I think that's really interesting, because what that leaves us with is a healthier workforce.

GamesBeat: The part about “the cake is a lie” in your talk made me curious. Was it from an actual Eliza conversation?

Khandaker: Oh, that was an actual conversation in Eliza, yeah. One of the things Eliza does, on the particular implementation it's running on, is store things the user has previously said and repeat them back. Like I said, it's that model of therapy where the therapist says, "That's interesting, tell me more about the cake." That's how it works.
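
For readers unfamiliar with the pattern, the Rogerian "tell me more about X" behavior she describes can be approximated in a few lines of generic Python. This is a toy illustration, not Weizenbaum's original program.

```python
import re

# A toy Rogerian reflection in the spirit of Eliza: remember a noun the user
# mentioned and hand it back as a prompt. Not Weizenbaum's original code.
memory = []


def respond(user_input: str) -> str:
    match = re.search(r"\bthe (\w+)", user_input.lower())
    if match:
        memory.append(match.group(1))
    if memory:
        return f"That's interesting. Tell me more about the {memory[-1]}."
    return "Please, go on."


print(respond("They told me the cake is a lie."))
# -> "That's interesting. Tell me more about the cake."
```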

Above: Spirit AI’s Ally

Image Credit: Spirit AI

GamesBeat: When I was an English major, we dealt with different theories like structuralism. You have a story arc. Is that an example of something AI could replicate? Could it create its own narratives, the emotional arc of a story?

Khandaker: Absolutely. There are several ways you can look at it. If you look at how the story unfolds between you and a character you’re talking to, there can be an arc that you architect. You might say, after a certain amount of time, or this many conversational moves, you want to ramp up the drama. You want to make the character suddenly angry. What are they angry about? There’s a particular thing that they want to bring up with you, that they’ve been meaning to talk to you about.

That’s one way of doing it, actually—when you’re making the story unfold through conversation, that’s one way you can control that arc. You can say, “This is the particular story I’ve written. Here are the ways it can unfold.” The character will say certain things when it’s the right time, because you’ve finally built their trust up or whatever. That’s another thing we can do through our system.
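
One loose way to picture that kind of authored gate, a beat that only fires once enough conversational moves have passed and trust is high enough, is sketched below. The structure is a guess for illustration, not Character Engine's actual data model.

```python
# Illustrative only: a dialogue beat that unlocks once enough conversational
# moves have passed and the character's trust crosses a threshold.
from dataclasses import dataclass


@dataclass
class CharacterState:
    trust: float = 0.0
    moves: int = 0
    revealed_secret: bool = False

    def register_move(self, friendly: bool) -> None:
        self.moves += 1
        self.trust += 0.1 if friendly else -0.1

    def next_beat(self) -> str:
        # The authored arc: after enough moves and enough trust, ramp up the drama.
        if not self.revealed_secret and self.moves >= 5 and self.trust >= 0.3:
            self.revealed_secret = True
            return "There's something I've been meaning to talk to you about..."
        return "generic small talk"
```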

GamesBeat: You can take a literary theory and say, “Here’s the best kind of story.” You can feed that into the AI and replicate a pattern.

Khandaker: Some of it is a bit more authored than that, with our system at least, right now. You’ll say, “Here is a scenario about trying to console a friend who’s gone through a breakup.” You can’t progress in the story until you’ve calmed her down, let’s say. It’s not predetermined as far as what you say, what the interactions are. You just know you have to say things to calm her down, then she’ll show you the way to the next part of the area you need to go to. It enables a whole new type of interaction in games and characters. Let’s actually think about what the social relationship is like. What do you say? What do you do? Things like that.

To answer your question about generating stories, there’s a lot of interesting academic work being done around—if you take a set story structure, like you said, can you generate pieces of that? There’s a lot of cool research being done in that realm. I think there’s so much interesting AI research going on today in games academia, and often the industry just isn’t looking at it. One thing we’re trying to do as a company is build tools that bridge those two worlds. We have a lot of people with academic backgrounds in interactive fiction and games narrative. We’re building tools that help people in the wider industry, developers, use some of this tech.


Above: Dolores is an AI character in Westworld who achieves consciousness.

Image Credit: HBO

GamesBeat: I watched Westworld, and I keep wondering about the notion of treating game characters as humans or as objects. If you think they’re human, then you interact with them in a very different way. What do you think about where this is going to go?

Khandaker: This is something I feel very strongly about. One of the biggest problems in games is we’ve put a lot of effort into the visual fidelity of characters, but there’s no—they’re one-dimensional. They’re just there as objects, as you say. You need to get a certain thing out of them. You have this very transactional relationship with them. Or they’re just there because you need to kill them to progress.

We need to have the social fidelity of these characters come in line with the visual fidelity we’ve gotten. How do we create games in which players do have to treat characters as human? Not that they’re being fooled into thinking they’re human. That was part of my talk. I think that we still—it’s like when you read a book or watch a movie. You relate to the characters in those fictions as human, right? You know they’re not real, but you still feel like they’re fleshed-out human characters. That’s what we want to get to with games.

I’m not saying games aren’t at the level of books and movies, because those are all very different things. But I think that in terms of conversation, we’re not quite there yet.

GamesBeat: There’s a lot there as far as what a person perceives, as opposed to what’s there or not there. If you get perfect AI multiplayer, then you can introduce this into a game, and I no longer believe I’m playing against a human. If I stop believing I’m playing against another person, it becomes less interesting.

Khandaker: Well, I would actually say that even if you knew you were playing against a simulated character, if the character was still behaving in interesting ways that you can’t guess—I think it’s still interesting. You don’t know where the edges of the system are.

That's what's interesting about interacting with people. You can't always predict them. They're revealing themselves to us as we interact with them. It's the same with game characters, I think. We can know, on some level, that a character is simulated or fictional, but still care about them. And certain types of players have a predisposition to doing that more than others.

Going back to Eliza, have you heard the phrase, "the Eliza effect"? It refers to the tendency we have to ascribe more sentience to things than they actually have. It's called that because of the way people thought of Eliza as a real therapist when it was just a chat bot repeating things back to them. But I think certain personality types and certain play styles—you just care about characters more than others. You can have the same triple-A big-budget narrative game and some players will just say, "I really care about this character," while someone else says, "They're just a token I need to interact with to get to the next part."

But one thing we can do through good design is bring that up so it's equal. By providing characters who are interesting to interact with, characters you really have to try to have a meaningful conversation with, we can make more people care about them.


Above: Detroit: Become Human starts with Connor the police negotiator.

Image Credit: Sony

GamesBeat: Something like Detroit: Become Human, did you have any reaction to that, or the general trend in stories about AI and the fear reaction to it?

Khandaker: I haven’t yet played Detroit. I need to get to that this weekend. But I know a lot about it thematically and so on. There’s obviously some sort of problematic stuff in there about equating AI to existing racial struggles in the world. There’s a lot there to unpack in general.

I worry less about what happens when machines get sentience and so on, because I think we’re pretty far out from that. There are some more pressing issues to do with AI that we need to think about now, like algorithmic bias. Are we training our machine learning data sets that we use to recognize language—are we recognizing all types of speech styles of people from all kinds of backgrounds? Things like that are way more interesting.

GamesBeat: Like that Microsoft chat bot.

Khandaker: Exactly. There are way more pressing issues to do with the ethics of AI than what happens when they gain sentience, whenever that happens.

Above: Spirit AI’s character engine

Image Credit: Dean Takahashi

GamesBeat: The detecting harassment element, how valuable do you think that’s going to become?

Khandaker: We're working with a couple of partners that we can't name just yet, but hopefully soon we can name one of them. Predominantly in the game space, because I think games are an environment where people do come to them with this playful, "oh, I can say what I want" attitude. There's really nuanced language at play there. That's why games have been a good place for us to start out developing this tech. It recognizes whether something is a positive interaction or a negative interaction. But we're also applying it to other industries as well.

GamesBeat: It seems like such a massive scale on the internet. You need something automated to respond to it all.

Khandaker: The first step is obviously recognizing when language is negative, or in a certain category of harassment. I think it’s up to the platform holder who’s implemented our tool to decide, how do I respond? They can do things like—if it’s sexual harassment, they immediately mute the user, for example. Or whatever it is that makes sense for your system. We’re not prescriptive about what moderators do as a result. We just provide lots of options.
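
A rough sketch of that division of labor, detection on one side and a platform-configurable response policy on the other, might look like this. The category and action names are examples, not Spirit AI's taxonomy.

```python
# Hypothetical sketch of the "we detect, the platform decides" split: detection
# emits a category, and the platform maps categories to its own actions.
DEFAULT_POLICY = {
    "sexual_harassment": "mute_sender",
    "hate_speech": "escalate_to_moderator",
    "scam": "hide_message",
    "negative_sentiment": "log_only",
}


def respond_to_detection(category: str, policy: dict = DEFAULT_POLICY) -> str:
    """Return the platform-configured action for a detected category."""
    return policy.get(category, "log_only")


# A platform that prefers a softer first response can simply override the mapping:
lenient = {**DEFAULT_POLICY, "sexual_harassment": "warn_sender"}
print(respond_to_detection("sexual_harassment", lenient))  # -> warn_sender
```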

GamesBeat: The Gamergate situation, where certain people get hit with so much harassment, it’s not particular insults so much as there are thousands of them at once. It gets overwhelming.

Khandaker: There was a lot of talk at the time, obviously, and with any of these ongoing harassment campaigns, about things like Twitter not providing enough tools to mute accounts, and people having to block things one by one. If there were some level of automation, where a system is able to automatically recognize that something is unsolicited, non-consensual harassment, and stop the target from being able to see it, or flag the account to some kind of moderator, then that's something we need.

Like you say, when it happens at that volume, it doesn't get dealt with otherwise. If you report someone, it takes ages before a human moderator gets around to doing anything. That's frustrating for both the user and the moderator. It's about how you can help the user, but also help the moderator. In our interface, there's a dashboard we provide for moderators where they get to see different categories.

This is an example of the Ally dashboard. There's no data in this right now, but what we're showing at the top here are user happiness levels. This is a nice little visualization for the moderator to see how happy and healthy the community happens to be. Are there lots of instances of harassment going on, negative things going on? Some themes we could look for—we can look for bots. We can look for flirt greetings, when people are saying something like, "Hey, sexy, what's going on?" and you don't necessarily respond to that. Negative game sentiments, if people are badmouthing your game in some way. Hate speech, scams. But we also track positive things as well. Are people saying good things about a particular feature? That helps you look for that.
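
As a hedged illustration of the aggregation a dashboard like this might sit on top of, the sketch below tallies the themes she lists and derives a crude happiness score. The theme names come from the interview; the scoring formula is invented here, not Ally's.

```python
# Sketch of the kind of aggregation behind a moderator dashboard: tally flagged
# themes and derive a rough community-health score. Formula is a placeholder.
from collections import Counter

THEMES = {"bots", "flirt_greetings", "negative_game_sentiment",
          "hate_speech", "scams", "positive_feedback"}


def summarize(flags: list) -> dict:
    counts = Counter(f for f in flags if f in THEMES)
    negative = sum(v for k, v in counts.items() if k != "positive_feedback")
    positive = counts.get("positive_feedback", 0)
    total = negative + positive
    happiness = positive / total if total else 1.0   # crude placeholder metric
    return {"counts": dict(counts), "happiness": round(happiness, 2)}


print(summarize(["hate_speech", "positive_feedback", "positive_feedback", "scams"]))
```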

Here, this is reporting. A lot of people fake-report each other, as a tool of harassment. Reporting somebody just to get their account banned. We look for threats like that – "Oh, I'll report you" – and that kind of thing. People are clever, right? They try to be terrible and also sometimes great to each other in really interesting, intelligent ways. It's about a system that can keep up with that.

GamesBeat: If AI eliminates a bunch of jobs, there’s this question of what we do afterward, what jobs get created. What interests me is the growing number of people who get paid to play games – streamers, influencers, esports athletes, cosplayers.

Khandaker: It’s that very Iain M. Banks type of thing, getting toward this playful society, enabling people to just be their playful, creative selves, and all the work is outsourced to machines.

GamesBeat: It’s a happier scenario as far as what AI helps us achieve.

Khandaker: The thing is, in many ways that’s good, but this is where it goes beyond tech. It gets into policy. It’s no good AI automating people’s jobs if the social and political structures don’t exist to support people doing other things.

GamesBeat: That’s pretty far out. It seems like you’re targeting more of the near term.

Khandaker: Personally I have a broad interest in how AI helps us with both work and play, in the near term and the longer term. All of these questions are super interesting. It’s important to think about that stuff now — what are the repercussions of the decisions we make in AI now to these longer-term things?

Above: Mitu Khandaker of NYU at Casual Connect Europe in London.

Image Credit: Dean Takahashi

GamesBeat: Is there any kind of AI character you’ve seen that you admire in some way?

Khandaker: When you talk about "AI characters," that can be categorized in all kinds of ways. Obviously our focus is on conversational AI characters, really helping developers create those.

GamesBeat: You have these grind conversations in games, where it’s not going anywhere. You’re just chatting in circles. But is there a story that the conversation can tell?

Khandaker: You’ve just got to the crux of the issue. Your problem isn’t whether the character is real or not. It’s whether you’re having an interesting conversation, whether there’s a point to the conversation. If we’re able to design conversation systems that keep conversations compelling, that answers that. It doesn’t matter whether a player thinks a character is real or not. In fact, it’s better if they don’t. It’s still fiction, just like we get people to care about fictional characters in books and films. But the reason books and films have compelling characters is because they’re written in a compelling way. We need to bring that same ability to naturalistic automated conversation in games.

You don’t need to believe that they’re real. You just need to believe that talking to them has a purpose. That can apply to talking to real people. [laughs] “Is there really a point to this conversation?”