Above: Westworld VR experience at TechCrunch Disrupt a few years ago.

Image Credit: Dean Takahashi

GamesBeat: I guess if they’re teaching us a lesson in some of these things, teaching us about morality or the choices we make, then it seems like that’s a good intent.

Bartle: It is, so long as it’s not just a cover. So long as the developer isn’t just saying, “We only did this to test your morality.” No, you didn’t. There’s another case in World of Warcraft where — sometimes it’s okay to let you break a boundary in order to show you where the boundaries lie. They tempt you to do something, you do it, and then you get punished for it afterward. Then you get told, “This is where the boundary really lies.”

That sort of thing is okay, and that does happen a couple of times. There’s an early quest, or there was in Classic, where some kind of demon thing tries to get you to do something you know you really shouldn’t. You do it, get caught, and get told off, so now you know where the boundaries are. There was a later one in Wrath of the Lich King where you were asked to kill a whole lot of Alliance people based on the memories of Arthas, the Lich King. In order to find out just how bad the Lich King is, you step into his shoes for a while. That’s okay, because again, I’m protected.

But there was another quest where you had to torture somebody. You had a pain stick, and you had to go and hit the person with the pain stick until they told you where somebody was. Basically you were torturing people. I didn’t like that, because when I signed up for World of Warcraft, I wasn’t expecting to be asked to torture people as part of a quest. If I had an option — torture or not — and I decided not to and there were consequences, fair enough. If I decided to torture and the consequences were worse — because, as is often the case with torture, you get the wrong information — again, that would be telling people that torture isn’t good here: if you do torture someone, the consequences are worse.

But that isn’t what happened. It was just a case of, let’s see, what’s the next step? Go and hit that person a couple of times, then learn where the archmage is hidden and go arrest him. With those kinds of boundary violations, you can see how it happened. They were writing a thousand quests and looking for different ways to use the limited number of tools in the box, and that’s just what happened. Someone made a mistake. It wasn’t done deliberately. At least I hope not.

Emergent behavior and consequences

GamesBeat: I had some similar experiences in Red Dead Redemption 2. I shot a dog by accident, and the sheriff came after me and wanted me out of town. I didn’t leave fast enough, so he started shooting at me, and I fired back and killed the sheriff. Then a whole posse came after me and killed me. I learned the lesson: you shouldn’t shoot dogs in this game.

Bartle: [laughs] That’s quite often the case. The guards in the town are completely incapable of staving off the bandits raiding the town, but if you shoot one of their chickens, suddenly they’re impervious to all pain and they’ll arrest you no matter what you do. They’re superhuman. Anyway. Those aren’t really AI issues, but they are morality issues.

Above: Red Dead Redemption 2 had a very realistic game world.

Image Credit: Rockstar

GamesBeat: One thing I’ve said before to game developers — it’s this quote from the Kurt Vonnegut novel, Mother Night. He says that for once, he knows the moral of his story. The book is about an American spy in World War II who does too good a job in his cover role as a Nazi propagandist, and he’s ultimately hanged for it. The moral of the story is, we are what we pretend to be, so we must be careful about what we pretend to be.

I always thought it was interesting to think that you can tell yourself that you’re just playing a game, playing a role, but if there are consequences to that, then you’re sort of kidding yourself.

Bartle: Yes, you’re lying to yourself. There are consequences to playing games. That’s the whole reason people play MMOs. It gives them the freedom to try on a new identity, a new version of themselves that’s like themselves, but not quite the same. It enables them to experiment with how to behave in a new scenario. Most people will come out of playing them with a better sense of who they are. The trouble is that if you’re a jerk, you’re going to come out as a better jerk than you were when you went in.

There is something called the Proteus effect, which suggests that when people are role-playing, whether in a game or in real life, it can change their real opinions.

What this shows is that when people play a character, they can be influenced by the character that they’re playing. Even things like — in ice hockey, one of the American sports, people who are wearing black uniforms are fouled more often than people who aren’t wearing black uniforms, and commit more fouls. The black uniform is saying, “We’re the bad guys.” When people are wearing some kind of a skin, they’re influenced by the skin they wear.

If you’re playing an MMO, well, you’re sort of being influenced by the character you’ve chosen to play. Now, in part that’s because you chose to play that character. You’ve chosen one that would somehow give you a return. But nevertheless, it does mean that there are some possible implications. If someone buys a game and there’s only one character, and that character is not a very pleasant one, then people could be unwittingly having their opinions subtly altered by playing that character.

Should we hold ourselves back?

Above: Pete Billington and Jessica Yaffa Shamash talk about Lucy at the Virtual Beings Summit.

Image Credit: Dean Takahashi

GamesBeat: Should game developers, and technologists in general, hold back from creating sapient AI? Does that carry some great risks with it? I don’t know if that’s the risk that they might harm us, or the risk that they can be abused, depending on the context. If we can foresee that this AI is going to get better and better, should we not do this?

Bartle: What I would say is, there’s a larger question. That is, is it actually moral or ethical to create an intelligent being in the first place? Never mind what the dangers are, because if they’re intelligent, then they’ll probably develop their own sense of morality, and it will probably be in line with ours, because every time any culture in the world has had to develop a morality, it has basically come down to the same set of core rules, humanist types of rules. The larger question is, should we create intelligent life anyway? Ignoring anything it could possibly do to hurt us, assuming it isn’t going to hurt us, is it ethical to create life?

We’re going to create this intelligence and we’re going to set it in an environment and it’s going to be suffering, because that’s what happens in environments. Alternatively it’s not going to be suffering, in which case it’s going to be bored. Eventually it’s going to die, or it’s going to change from what it originally was. That’s the first question we should be asking. Is it ethical to create life? Now, we create life through reproduction. But that’s not quite the same as creating an independent life, a separate life form.

As to whether we could put limits on it with some kind of Geneva Convention for AI — at the moment the technology is advancing very quickly, but the computer technology it runs on isn’t. The energy requirements for some of the neural networks being put out there are vast. Training these things, getting these genetic algorithms to fight each other forever, takes a lot of computing power, and that means a lot of energy. It’s not as if everyone can do it.

Eventually, when we have unlimited energy, they will. But what happens up until that point? We’ve managed to keep the number of nuclear powers down quite low. It’s still fewer than a dozen. But we can’t stop nuclear power. If somebody wants to make it, they will. Likewise, if we created a treaty that says no superintelligent evil AIs will be created, that means some mad dictator will think, “Great, I’ll have the only one!”

In practice, all we can do is delay and slow down. But by delaying and slowing down, then we can develop other ways to think about things. We can develop moral codes. We can decide what to do when this happens. I am in favor of slowing it down. We’re not close yet to having sapient AI. Frankly, the world’s in more danger from some badly written code at a nuclear power station going haywire than it is from AI. There’s plenty of other ways that computers can destroy the world.

GamesBeat: We have some time to think about it, then.

Bartle: I have time to die before it happens, I think. [laughs]

GamesBeat: Well, they might bring you back, though. Upload a version of you to the cloud and keep it there.

Bartle: They could upload a version of me and then bring back multiple copies, so there’s a dozen of me in the world, or 200, or an army of me.

GamesBeat: Then they can see how each one behaves.

Bartle: Well, then it wouldn’t be me, would it?