Morally considerable beings
GamesBeat: In your talk, and in the book as well, what sorts of things came to mind when you were asking this question of whether the sapient NPCs were morally considerable, and whether a code of ethics should apply to how we behave toward them?
Bartle: The first thing is, if something itself has morals, then you can be pretty sure it’s morally considerable. If we create a virtual world and the NPCs develop their own set of morals — they won’t harm people for no good reason, they’ll be kind to animals, all the things we would consider moral — then if those are things they’ve developed as a code of behavior, then it shows they have morals. In that case we should treat them as moral beings.
Then, if we do treat them as moral beings, they have to appear somewhere in our hierarchy of moral beings. Do we consider them the same as us, because they're as smart as us or smarter? Do we consider them the same as animals in our world? If the trolley problem here were to save the dog or save the virtual equivalent of Mother Teresa, what do we do? Do we save the real, living dog, or the saint who, as far as she's concerned, is just as real and alive?
When we make these decisions — we’ll save the dog over the saint — then suddenly you have to think, “Just a moment. We’re NPCs. That means any god of our reality who has a dog can switch off our reality and kill us all because their dog was in danger.” That doesn’t sound right, does it? We would probably put up quite a stiff argument against our world being destroyed to save a higher entity’s dog. Likewise, the people in the virtual world would. They’d also argue that they’re on par with us.
GamesBeat: Do they also have to have some measure of free will? Or is the fact that they have morals enough?
Bartle: I would say that they're not going to develop morals unless they have free will. If you just coded in the morals, then they don't have morals. It's just a behavior. Morals are where you have the ability to decide between doing the right thing and not doing the right thing. Some of them won't do the right thing. If everybody always does the right thing, that's just part of the physics of the world.
Creatures with free will
GamesBeat: So you’re talking more about emergent behavior than hard-coded behavior for these NPCs. They can’t trick us with hard-coded responses.
Bartle: No, no. I’m assuming that these creatures have free will, or at least from their perspective have free will. They are as smart as us, rather than Neanderthal-smart or children-smart. They’re as smart as adult humans or smarter.
If it’s emergent, then, well, actually the nature of their intelligence is another one of the topics. It could be emergent, emerging from the interactions you create in the world, or it could be that we’ve created it and we put it on board with each little NPC. Each of them is a separate entity on a machine that controls the NPCs, but they don’t know they’re on a separate machine. Or we could have one huge AI that controls all the characters. They all think independently, but they’re all connected because we have a planet-sized computer somewhere doing the thinking for all of them.
The thing is, if we did have that, then there would be some things that would be so easy to implement, like telepathy, that you can argue that because we don't have telepathy in reality, our minds in reality are not controlled by an external computer. You can make arguments like that. But however you implement the intelligence, eventually we will have AIs that are as intelligent as us.
Where is the line
GamesBeat: So if physical form doesn't matter, this gets to be very tricky when you try to draw this line. This is human, this is intelligent, and this is not.
Bartle: If you're saying we're only interested in humans, not interested in a computer that's sapient, even if it's much smarter than us — we're not interested in it because it's not human? Well, that's fair enough, but then you have to ask the question: if there is a god or gods of our reality, they can treat us with exactly that same disrespect. They can say, "You aren't Asgardians," or whatever particular theology you believe in. "We can do what we like to you. You're just bits in a database." From their perspective, we are just bits in a database.
Whatever we decide we’re going to do to our NPCs, we can say, “Well, then a higher reality, if there is one, could treat us the same way. Maybe we’d better be nice to our reality, because we don’t want people higher than us treating us how we treat the people below us.” That was one of the things I was hoping to get people to think about.
GamesBeat: There's a certain number of fiction writers who believe that if we create these things, they'll take over from us. They'll eventually be smart enough to outwit us and become the dominant species.
Bartle: That’s if they don’t fall out amongst themselves. Once you give things free will, they have free will. Some of them will want to take out the other ones, because that’s what people with free will sometimes do. In terms of virtual worlds, it doesn’t matter how smart an NPC is. It can’t affect our world unless we let it. If they have some kind of amazing powers of persuasion, I guess they could persuade a player to go do something for them. But they have no physical ability to do anything unless we give them that ability.
We could, for example, build a robot in our world, a human-looking robot, and we could give control of that robot to an NPC in a virtual world. Suddenly the NPC is able to perceive our world and act in our world.
They would be able to do things, and if they were super geniuses, they could go off and affect our world in ways that we perhaps would wish they wouldn’t. But unless we actually give them access to our world, they have no more access to our world than we have to Mount Olympus or heaven or anything else you want to call it.
How to behave in a virtual world with AI
GamesBeat: Unless we actually create Westworld.
Bartle: Well, if you create Westworld, that’s part of our reality. If we create smart drones that can decide who and who not to kill, that’s within our reality. That’s the “robots are going to rise up and kill us all” argument. But what I was looking at here wasn’t robots in our reality. It was about how we as the gods of a reality should treat the NPCs within that reality. They’re super-smart, but they’re contained. They can’t get out. What obligations do we have to them, if any?
GamesBeat: I used to have a moral code in the games I chose to play. I felt like it was okay for me to be the good guy, not to be the bad guy. Something like Grand Theft Auto gave me a great deal of angst. If I had to go shooting police officers in the game, then I felt like — this, to me, is something other than fun. This is not enjoyable.
And yet by the time Grand Theft Auto IV and V came out, I thought about this more, and I felt like, “Well, these things seem artistic. They’re good games, among the best games anybody’s making. The characters are very interesting. Maybe I’m wrong about this notion. Maybe the only thing I need to remember is the difference between fantasy and reality.” If Grand Theft Auto V is fantasy, I can behave like Trevor, the really bad character, and just do the things I think he would do, because I’m playing his character. That might be okay behavior for a game player. I don’t know. You get to then fully appreciate the artistic creation.
Bartle: You do, but the thing is, an artistic creation is trying to say something. If it’s not trying to say something, it’s not an artistic creation. Artists create works of art in order to say something to the people who consume that work of art. If they could say it in a different way, they would, particularly in the case of games, because they’re so expensive to make. If they’re attempting to say something meaningful, and the only way you can find out is by playing the game — by playing the game you pick up on what they’re saying — then that’s the art of game design.
If what they're trying to say is, "You can be who you want to be, but do you want to be who you are?" then that's an interesting kind of question. By providing you with a number of characters of various levels of morality, they could be inviting you to test your own morality, and for you to come up with your own excuses as to why the things you're being asked to do — which you don't want to do — are legitimate.
But what I would say is if they’re going to do that sort of thing, then you need to know before you start playing the game. That’s the kind of thing where they can’t spring it on you. And they don’t with Grand Theft Auto. Everybody knows that it’s that kind of game. But if you were playing a regular, ordinary — I don’t know, a Japanese RPG or something like that — if you’re trying to defeat the enemies, and then suddenly you find yourself having to maybe rape somebody, well, hold on just a moment. When I bought this game I didn’t know that was going to happen. I knew I was going to have to kill bad guys and ride chickens, but I didn’t know that I was going to be asked to do that. That’s not right.
You need to know in advance, roughly, where the boundaries are in a game, if you're to accept it. If they say at the beginning that this game has no boundaries, it's very dark, anything could happen, well, then perhaps it will happen. I remember being very annoyed with one of the Elder Scrolls games, when I got bitten by a vampire. You don't find out until it's too late to do anything about it, and then I had to go around sucking blood from beggars to stay alive. I thought I should have been warned about that. I know some people like the vampires, but I didn't want to have to go out and suck blood. That wasn't something I was told I might have to do, or something my character might have to become.