This article is part of a VB special issue. Read the full series here: Power in AI.

Richard Bartle is one of the leading academics on video games and is a senior lecturer and honorary professor of computer game design at the University of Essex in the United Kingdom. He might seem an unusual choice to talk about the ethics of artificial intelligence, but video game developers have grappled for a long time with the ethics of creating virtual worlds with AI beings in them. Not only do game designers have to consider the ethics of what they create in their own worlds; they also have to consider how much control to grant players over the AI characters who inhabit those worlds. If game developers are the gods, then players can be the demi-gods.

He recently spoke about this topic in a fascinating talk in August at the IEEE Conference on Games in London. I interviewed him about our shared interest in the intersection of AI, games, and ethics. He is in the midst of writing a book about the ethics of AI in games. His aim is to point out the unusual moral and ethical questions that AI specialists of the future will face.

I asked if sentient AI was on the horizon. He corrected me, noting that “sapient AI” is the right description, as it refers to AIs that are conscious, self-aware, and able to think. Before we create sapient non-player characters in games, Bartle believes we need an ethical system in place. And he’s not so sure that we should create them in the first place. Bartle believes that game developers are like gods of the worlds they create. “Those who control the physics of a reality are the gods of that reality,” he said.

Below is an edited transcript of our interview.

Above: This may be Richard Bartle on Skype. Or maybe his virtual character.

Image Credit: Dean Takahashi

GamesBeat: It seems like a fascinating topic, and a very timely one. It feels more relevant to today’s headlines than ever before, I would guess.

Bartle: A lot of the general AI, ethics of AI, we’ve thought about for years. I did my PhD on AI in the 1980s. Some of the things that people talk about with AI, we were talking about back then — only hypothetically, but nevertheless we were considering these things. But some of the things I talk about in the deck are to do with games and AI in a way that we weren’t looking at it in the past.

Normally, when you look at AI and games, it’s using AI as weapons, using AI as ways to control a population, or using AI to increase your own intelligence. What happens when AI gets sentience and they want to kill us all? This kind of thing. The Terminator. The clue’s in the name. [laughs] But what I was looking at was different.

Let’s suppose that we have these AIs, but they’re in a pocket environment and they can’t get out. They can’t do anything to us except through us. How should we treat them? What’s right and what’s wrong? It turns out that when you look into the philosophy of this, well, the philosophers haven’t. They haven’t really looked at what it means to be someone in control of an entire reality in which intelligent beings live.

Theologians have, sort of, but they’ve only looked at our reality. They haven’t looked at a sub-reality in which we are the gods. They’ve looked at our reality in which they are proposing there are zero to infinity gods above. It’s a different area. But the thing is that people who’ve made these games have actual practical experience of what it means to be in control of them.

Now, we don’t have sentient–well, sapient is the correct word. We don’t have sapient non-player characters at the moment. The question I was asking is, we don’t know when we’re going to get them. It could be in 10 years or 1,000 years or a million years. But eventually, we will get them. And when we do get them, how are we going to treat them? What’s right and what’s wrong? That’s what I was asking.

The developers are gods

Above: The Terminator

Image Credit: Orion Pictures

GamesBeat: It seems like a lot matters in terms of what we call them. I know it’s at the top of your slide deck. You can refer to players as gods, and then if we’ve decided to call ourselves gods, then everything we do is justifiable, right?

Bartle: Well, players aren’t gods. Designers and developers are gods. They control the reality, the physics of the world. The players can go in there and have — I suppose you could say they have godlike powers, but they can’t change the physics of the world. They have abilities beyond those of the non-player characters. For example, they can communicate with each other without NPCs being aware it’s happening.

GamesBeat: If we call ourselves that, we’ve already made a kind of ethical judgment, right?

Bartle: Going back to Dungeons and Dragons terminology — gods, demi-gods, and heroes — the demi-gods are probably the customer service people. They have powers beyond the regular mortals, what the NPCs have. But they don’t have physics-changing powers. The players would be the heroes. They’re going in there and they’re bigger, better, superior to the NPCs. But they’re not gods. They don’t have the full range of abilities that the customer service reps have. Customer service reps are probably the angels.

Westworld showed us the way?

Above: Anthony Hopkins and Jeffrey Wright in Westworld.

Image Credit: HBO

GamesBeat: What was some of the reference material, if the philosophers didn’t really tackle this? Does something like Westworld do a better job? [laughs]

Bartle: Oddly, when Westworld came out, I’d already thought about these things. The Westworld TV series thought about them in quite a lot more depth than the original movie from the ’70s did. But the source materials — essentially it’s metaphysics. I read a whole bunch of things on metaphysics and meta-metaphysics. There’s even a book called Meta-Metaphysics, the metaphysics of metaphysics. I was looking for some of the problems that philosophers have about how the world is built and then saying, “Well, we don’t have that problem because we’ve had to do it.”

This isn’t strictly to do with AI and ethics — but for example, philosophers have this problem to do with whether an object can share the same space as another object. If I take a lump of clay and I give it a name — everybody knows this particular lump of clay. It’s a particular color. Everybody knows it. Then I mold that clay into a statue or something. Somebody comes along who didn’t know it as clay, but sees it as a statue. Now there’s two objects there. One of them is the clay and one of them is the statue. Is that just clay that’s been shaped into the statue, so the clay is the statue? Or is [it] two objects that are somehow superimposed?

As a game designer, you actually have to implement that. It’s one or the other. You make the decision, which one you’re going to go with. Similarly, the story of — was it Theseus’s ship? Someone’s ship. Or Lincoln’s axes. Was it Washington’s axes? Never mind. Basically, it’s the case where you have a ship, an old wooden ship, and it starts to get a bit worn out, so you take a few planks out and replace them. Then you notice the mast going a bit, so you take the mast out, replace the sails. Eventually you’ve replaced the whole ship. Is it the same ship?

Furthermore, what if someone collects all the pieces you threw away and sticks them all together to make the original ship out of the same pieces? Is that the same ship? Or a different ship? If it was a magic ship, to which ship would the magic be attached? The one that’s been gradually replaced or the one that’s been built? These are questions which philosophers can discuss forever, and indeed have, and probably still will. But when it comes to game development, you have to implement it. Which of these are we going with?
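Bartle’s point that a game has to commit to one answer can be made concrete. Below is a minimal, purely illustrative Python sketch of the two choices he describes: identity as a stable entity ID that survives reshaping and part replacement (the clay simply is the statue, and the gradually repaired ship stays “the” ship), versus identity derived from the parts themselves (so the ship rebuilt from the discarded planks would be the original). None of the class or function names come from any real game; they are hypothetical.

```python
# Hypothetical sketch: two ways a game might implement object identity.
# Option A: identity is a stable entity ID; form and parts can change freely.
# Option B: identity is a fingerprint of the current parts, so the reassembled
#           original planks would carry the "magic" with them.

import itertools

_next_id = itertools.count(1)


class Entity:
    """Option A: one persistent entity whose form and parts may change."""

    def __init__(self, form, parts):
        self.entity_id = next(_next_id)  # never changes: "the" ship, "the" clay
        self.form = form                 # "lump of clay", "statue", "ship", ...
        self.parts = list(parts)

    def reshape(self, new_form):
        self.form = new_form             # the clay *is* the statue

    def replace_part(self, old, new):
        self.parts[self.parts.index(old)] = new  # still the same entity_id


def identity_by_parts(parts):
    """Option B: identity is just a fingerprint of the current parts."""
    return frozenset(parts)


if __name__ == "__main__":
    ship = Entity("ship", ["plank1", "plank2", "mast1"])
    original_fingerprint = identity_by_parts(ship.parts)

    # Gradually repair the ship, piece by piece.
    ship.replace_part("plank1", "plank1_new")
    ship.replace_part("plank2", "plank2_new")
    ship.replace_part("mast1", "mast1_new")

    # Someone reassembles the discarded pieces into a second ship.
    rebuilt = Entity("ship", ["plank1", "plank2", "mast1"])

    # Option A: the magic stays with the repaired ship (same entity_id).
    print("Option A keeps identity with entity", ship.entity_id)
    # Option B: the magic follows the original parts to the rebuilt ship.
    print("Option B says the rebuilt ship is the original:",
          identity_by_parts(rebuilt.parts) == original_fingerprint)
```

Neither option is the “right” one; the point is only that the code forces a decision the philosophers are free to leave open.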

Game developers have an insight into what it means to be someone who controls the physics of a reality, if you call that a god. Because they have an insight, that means they can say things which may be of interest to philosophers and theologians.

The reality of The Sims

Above: The Sims 4

Image Credit: EA

GamesBeat: We have god games like The Sims. In that sense, the player in a god game is almost like a game developer creating a game, where they create the entities in the world. Is there a difference?

Bartle: There is a difference, yes. The difference is that, in a god game, yes, you have the powers of a god, except for the power of creating the reality. A bona fide, fully powered god can change the world. If you’re in, I don’t know, The Sims or something like that, a game where you’re able to influence characters by changing the world about them, you’re only changing the world within the constraints of the program. It doesn’t matter what you’ve got in The Sims. You’re not able to change the code that underlies it. But the developers can.

GamesBeat: I guess there’s an interesting hierarchy here, where you have the player in the Sims, the designer of the game, and then Andrew Wilson, the CEO of Electronic Arts, who’s the god over the game developers telling them what they can and can’t do.

Bartle: [laughs] Well, he’s not a god. He can instruct them. But he operates within the same physics as the game developers, the physics of reality. If the game developers say, “We’re just going to sit here and create the game we want,” he can sack them, but if all the developers get together and say, “No, we’re going to stay here and barricade ourselves in the office and finish the game,” then he has to go call the police. Then you have people who say, “No, we want this game finished,” and they’re all rooting for the developers. They go out and disarm the police and eventually you call in the army and there’s riots. But ultimately they all operate within the same physics of reality. They’re attempting to use the physics of reality to control the behavior of people in reality.

Now, the other thing developers could do is go over their heads. “Okay, I think there’s a higher power in a higher reality and I shall appeal to that. I’m going to pray you don’t do this.”

Who is responsible?

Above: A Richard Bartle slide.

Image Credit: Richard Bartle

GamesBeat: I guess what we’re getting at is, who’s responsible? The player has certain responsibilities and certain ethics. But so does the game developer, if they allow the player to do certain things, or give less freedom to the player. They’re marshaling their responsibility and their own sense of ethics.

Bartle: Game developers are in an interesting situation. There’s a paradox about game design, which is that you impose constraints on what the players can do in order to free them up to do things that they couldn’t do if you hadn’t put the constraints on them. When you’re playing a game, you can behave differently to what you do in reality, because the game gives you that protection. It’s a frame, they call it.

When you’re playing a game, your behavior doesn’t have the same impact as it does in reality. It’s the same as if you’re an actor on a stage. If you’re an actor on a stage and you start using racist language, well, if that’s part of the play, you’re protected. If you suddenly start shouting, “There’s a fire!” and that’s not part of the play, and there isn’t a fire, then suddenly you’re liable. But if it’s part of the play and you start shouting about a fire, well, you’re fine, even if people get trampled to death trying to get out from the imaginary fire.

In game design, we impose these constraints, but the constraints allow you to operate in ways that you couldn’t normally. In MMOs, which is my field, they enable you to act in ways that, in real life, you couldn’t. But in so acting, you gain a better understanding of yourself, and so that affects what you might do in real life in a good way. That’s the theory anyway.

Obviously there’s responsibility. Because you could weaponize games, if you really wanted to. You could do an awful lot of things with them. I was in a group at Project Horseshoe where we considered ways to use games badly. It’s actually very easy to use games badly. If I wanted to create a game that would, I don’t know, give people carpal tunnel syndrome, I could. I could gaslight them. I could ruin their lives.

Fear of the future

Above: A character in Until Dawn. Video game characters are looking amazingly realistic.

Image Credit: Sony

GamesBeat: It sounds like a Black Mirror episode.

Bartle: Yeah, yeah. We didn’t publish the paper, because if we did someone might act on it. We didn’t really want that. Game developers and designers, as it turns out, are on the whole quite ethical.

GamesBeat: You just got us into a loop there.

Bartle: My main aim, eventually, in the whole system, was to provoke people into think[ing] how they would behave if they were a god of NPCs. What are the right things to do and the wrong things to do? And then for them to say, “I’m not a god of the NPCs, but in reality I’m an NPC. How do I believe any god or gods who may or may not exist in our reality — how do I think they’re behaving? Is their behavior ethical by what I’ve just figured out using this thought experiment where I’m a god?” That was the point.

Are sapient game characters property?


Above: Westworld’s hosts are disposable.

Image Credit: Warner Bros.

GamesBeat: If I assert that the game characters are my property — I bought them with my $60 for the game — can I just do anything I want with my property? That’s one question. I guess we’re getting to this day where, with sapient AI, the AI is so good that we’re no longer faking it. Then it seems to cross that line from property into something else.

Bartle: Yes. If you say, “I own this game and I can do what I like with it, because I own it,” well, actually, no. There are some things you might think you own, but you can’t do anything you like to them. Children would be something that springs to mind. They’re my children, using the possessive, but I can’t just — yes, these children wouldn’t exist if I hadn’t gotten drunk that night. That doesn’t mean I have a full right to them.

If I create a game as a designer and the game’s got intelligent NPCs, and then I sell that game to somebody else, I’m not selling the NPCs. I’m just selling the world in which the NPCs live. But what happens when you lose interest and stop playing? All those characters are going to disappear and die? Did you just kill all those characters? That’s something we don’t really have an answer for at the moment.

Morally considerable beings


Above: Quantic Dream’s Detroit: Become Human pushed the edge on face animation.

Image Credit: Sony

GamesBeat: In your talk, and in the book as well, what sorts of things came to mind when you were asking this question of whether the sapient NPCs were morally considerable, and whether a code of ethics should apply to how we behave toward them?

Bartle: The first thing is, if something itself has morals, then you can be pretty sure it’s morally considerable. If we create a virtual world and the NPCs develop their own set of morals — they won’t harm people for no good reason, they’ll be kind to animals, all the things we would consider moral — then if those are things they’ve developed as a code of behavior, then it shows they have morals. In that case we should treat them as moral beings.

Then, if we do treat them as moral beings, they have to appear somewhere in our hierarchy of moral beings. Do we consider them the same as us, because they’re as smart as us or smarter? Do we consider them the same as animals in our world? If the trolley problem for here was save the dog or save the virtual equivalent of Mother Teresa, what do we do? Do we save the real living dog or the, as far as she’s concerned, real live living saint?

When we make these decisions — we’ll save the dog over the saint — then suddenly you have to think, “Just a moment. We’re NPCs. That means any god of our reality who has a dog can switch off our reality and kill us all because their dog was in danger.” That doesn’t sound right, does it? We would probably put up quite a stiff argument against our world being destroyed to save a higher entity’s dog. Likewise, the people in the virtual world would. They’d also argue that they’re on par with us.

GamesBeat: Do they also have to have some measure of free will? Or is the fact that they have morals enough?

Bartle: I would say that they’re not going to develop morals unless they have free will. If you just coded in the morals, then they don’t have morals. It’s just a behavior. Morals are where you have the ability to decide between doing the right thing and not doing the right [thing]. Some of them won’t do the right thing. If everybody does the right thing, that’s part of the physics of the world.

Creatures with free will


Above: Pete Billington and Jessica Yaffa Shamash talk about Lucy at the Virtual Beings Summit.

Image Credit: Dean Takahashi

GamesBeat: So you’re talking more about emergent behavior than hard-coded behavior for these NPCs. They can’t trick us with hard-coded responses.

Bartle: No, no. I’m assuming that these creatures have free will, or at least from their perspective have free will. They are as smart as us, rather than Neanderthal-smart or children-smart. They’re as smart as adult humans or smarter.

If it’s emergent, then, well, actually the nature of their intelligence is another one of the topics. It could be emergent, emerging from the interactions you create in the world, or it could be that we’ve created it and we put it on board with each little NPC. Each of them is a separate entity on a machine that controls the NPCs, but they don’t know they’re on a separate machine. Or we could have one huge AI that controls all the characters. They all think independently, but they’re all connected because we have a planet-sized computer somewhere doing the thinking for all of them.

The thing is, if we did have that, then there would be some things that would be so easy to implement, like telepathy, that you can argue that because we don’t have telepathy in reality, our minds in reality are not controlled by an external computer. You can do some arguments like that. But however you implement the intelligence, eventually we will have AIs that are as intelligent as us.
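The distinction Bartle draws here, separate per-NPC minds versus one shared controller thinking for every character, maps onto a concrete implementation choice. Below is a minimal, purely hypothetical Python sketch of the two designs (the class names are invented, not any real engine’s API); it also shows why “telepathy” would be nearly free in the centralized version, since every NPC’s knowledge already lives in one place.

```python
# Hypothetical sketch of two NPC-control architectures:
# independent per-NPC minds versus one central, shared mind.
# All names are illustrative and not taken from any real engine.

from dataclasses import dataclass, field


@dataclass
class IndependentNPC:
    """Each NPC keeps its own private state and makes its own decisions."""
    name: str
    memory: list = field(default_factory=list)

    def decide(self, observation):
        self.memory.append(observation)  # knowledge stays local to this NPC
        return f"{self.name} reacts to {observation}"


class HiveMind:
    """One controller thinks for every NPC; shared state makes 'telepathy' trivial."""

    def __init__(self, names):
        self.names = list(names)
        self.shared_memory = []  # effectively, every NPC knows all of this

    def decide(self, name, observation):
        self.shared_memory.append((name, observation))
        # Any NPC's decision can draw on what every other NPC has observed.
        return f"{name} reacts, aware of {len(self.shared_memory)} shared facts"


if __name__ == "__main__":
    alice = IndependentNPC("Alice")
    print(alice.decide("a dragon"))          # Alice knows only what she has seen

    hive = HiveMind(["Alice", "Bob"])
    hive.decide("Alice", "a dragon")
    print(hive.decide("Bob", "a merchant"))  # Bob already "knows" about the dragon
```

This is the basis of Bartle’s aside that the absence of telepathy in our own reality argues against our minds being run by one external computer.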

Where is the line?

GamesBeat: So if physical form doesn’t matter–this gets to be very tricky when you try to draw this line. This is human, this is intelligent, and this is not.

Bartle: If you’re saying we’re only interested in humans, not interested in a computer that’s sapient, even if it’s much smarter than us — we’re not interested in it because it’s not human? Well, that’s fair enough, but then you have to ask the question. That means that if there is a god or gods of our reality, they can treat us with exactly that same disrespect. They can say, “You aren’t Asgardians,” or whatever particular theology you believe in. “We can do what we like to you. You’re just bits in a database.” From their perspective, we are just bits in a database.

Whatever we decide we’re going to do to our NPCs, we can say, “Well, then a higher reality, if there is one, could treat us the same way. Maybe we’d better be nice to our reality, because we don’t want people higher than us treating us how we treat the people below us.” That was one of the things I was hoping to get people to think about.


Above: World of Warcraft

Image Credit: Heather Newman

GamesBeat: There’s a certain number of fiction writers who believe that if we create these things, they’ll take over from us. They’ll eventually be smart enough to outwit us and become the dominant species.

Bartle: That’s if they don’t fall out amongst themselves. Once you give things free will, they have free will. Some of them will want to take out the other ones, because that’s what people with free will sometimes do. In terms of virtual worlds, it doesn’t matter how smart an NPC is. It can’t affect our world unless we let it. If they have some kind of amazing powers of persuasion, I guess they could persuade a player to go do something for them. But they have no physical ability to do anything unless we give them that ability.

We could, for example, build a robot in our world, a human-looking robot, and we could give control of that robot to an NPC in a virtual world. Suddenly the NPC is able to perceive our world and act in our world.

They would be able to do things, and if they were super geniuses, they could go off and affect our world in ways that we perhaps would wish they wouldn’t. But unless we actually give them access to our world, they have no more access to our world than we have to Mount Olympus or heaven or anything else you want to call it.

How to behave in a virtual world with AI

GamesBeat: Unless we actually create Westworld.

Above: In a trailer, a Westworld model invites you to live without limits.

Image Credit: HBO

Bartle: Well, if you create Westworld, that’s part of our reality. If we create smart drones that can decide who and who not to kill, that’s within our reality. That’s the “robots are going to rise up and kill us all” argument. But what I was looking at here wasn’t robots in our reality. It was about how we as the gods of a reality should treat the NPCs within that reality. They’re super-smart, but they’re contained. They can’t get out. What obligations do we have to them, if any?

GamesBeat: I used to have a moral code in the games I chose to play. I felt like it was okay for me to be the good guy, not to be the bad guy. Something like Grand Theft Auto gave me a great deal of angst. If I had to go shooting police officers in the game, then I felt like — this, to me, is something other than fun. This is not enjoyable.

And yet by the time Grand Theft Auto IV and V came out, I thought about this more, and I felt like, “Well, these things seem artistic. They’re good games, among the best games anybody’s making. The characters are very interesting. Maybe I’m wrong about this notion. Maybe the only thing I need to remember is the difference between fantasy and reality.” If Grand Theft Auto V is fantasy, I can behave like Trevor, the really bad character, and just do the things I think he would do, because I’m playing his character. That might be okay behavior for a game player. I don’t know. You get to then fully appreciate the artistic creation.

Bartle: You do, but the thing is, an artistic creation is trying to say something. If it’s not trying to say something, it’s not an artistic creation. Artists create works of art in order to say something to the people who consume that work of art. If they could say it in a different way, they would, particularly in the case of games, because they’re so expensive to make. If they’re attempting to say something meaningful, and the only way you can find out is by playing the game — by playing the game you pick up on what they’re saying — then that’s the art of game design.

If what they’re trying to say is, “You can be who you want to be, but do you want to be who you are?” then that’s an interesting question. By providing you with a number of characters of various levels of morality, they could be inviting you to test your own morality, and to come up with your own excuses as to why the things you’re being asked to do — which you don’t want to do — are legitimate.

But what I would say is if they’re going to do that sort of thing, then you need to know before you start playing the game. That’s the kind of thing where they can’t spring it on you. And they don’t with Grand Theft Auto. Everybody knows that it’s that kind of game. But if you were playing a regular, ordinary — I don’t know, a Japanese RPG or something like that — if you’re trying to defeat the enemies, and then suddenly you find yourself having to maybe rape somebody, well, hold on just a moment. When I bought this game I didn’t know that was going to happen. I knew I was going to have to kill bad guys and ride chickens, but I didn’t know that I was going to be asked to do that. That’s not right.

You need to know in advance, roughly, where the boundaries are in a game, if you’re to accept it. If they say, in the beginning, that this game has no boundaries, it’s very dark, anything could happen, well, perhaps it will happen. I remember being very annoyed with one of the Elder Scrolls games, when I got bitten by a vampire. You don’t find out until it’s too late to do anything about it, and then I had to go around sucking blood from beggars to stay alive. I thought I should have been warned about that. I know some people like the vampires, but I didn’t want to have to go out and suck blood. That wasn’t something I was told I might have to do, or have to be, as my character.

Above: Westworld VR experience at TechCrunch Disrupt a few years ago.

Image Credit: Dean Takahashi

GamesBeat: I guess if they’re teaching us a lesson in some of these things, teaching us about morality or the choices we make, then it seems like that’s a good intent.

Bartle: It is, so long as it’s not just a cover. So long as I’m not just saying, “We just did this to test your morality.” No, you didn’t. There’s another case in World of Warcraft, where — sometimes it’s okay to show you a bad boundary — to break a boundary in order to show you where the boundaries lie. They tempt you to do something, and you do it, and then you get punished for it afterward. Then you get told, “This is where the boundary really lies.”

That sort of thing is okay, and that does happen a couple of times. There’s an early quest, or there was in Classic, where some kind of demon thing tried to get you to do something you knew you really shouldn’t. Then you do it and get caught and get told off, so now you know where the boundaries are. There was a later one in Wrath of the Lich King where you were asked to kill a whole lot of Alliance people based on the memories of Arthas, the Lich King. In order to find out just how bad the Lich King is, you step into his shoes for a while. That’s okay, because again, I’m protected.

But there was another quest where you had to torture somebody. You had a pain stick, and you had to go and hit the person with the pain stick until they told you where somebody was. Basically you were torturing people. I didn’t like that, because when I signed up for World of Warcraft, I wasn’t expecting to be asked to torture people as part of a quest. If I had an option — torture or not — and I decided not to and there were consequences, fair enough. If I decided to torture and the consequences were worse, because as is often the case with torture you get the wrong information, then again, that would be telling people that torture isn’t good here. If you do torture someone, the consequences are worse.

But that isn’t what happened. It was just a case of, let’s see, what’s the next step? Go and hit that person a couple of times and then learn where the archmage is hidden and go arrest him. Those kinds of boundaries there are — you can see how it happened. They were writing a thousand quests and looking for different ways to use the limited number of tools in the box, and that’s just what happened. Someone made a mistake. It wasn’t done deliberately. At least I hope not.

Emergent behavior and consequences

GamesBeat: I had some similar experiences in Red Dead 2. I shot a dog by accident, and the sheriff came after me and wanted me out of town. I didn’t go out fast enough, and so he started shooting at me, so I fired back and killed the sheriff. Then a whole posse came after me and killed me. I learned the lesson. You shouldn’t shoot dogs in this game.

Bartle: [laughs] That’s quite often the case. The guards in the town are completely incapable of staving off the bandits raiding the town, but if you shoot one of their chickens, suddenly they’re impervious to all pain and they’ll arrest you no matter what you do. They’re superhuman. Anyway. Those aren’t really AI issues, but they are morality issues.


Above: Red Dead Redemption 2 had a very realistic game world.

Image Credit: Rockstar

GamesBeat: One thing I’ve said before to game developers — it’s this quote from the Kurt Vonnegut novel, Mother Night. He says that for once, he knows the moral of his story. The book is about an American spy in World War II who does too good a job in his cover role as a Nazi propagandist, and he’s ultimately hanged for it. The moral of the story is, we are what we pretend to be, so we must be careful about what we pretend to be.

I always thought it was interesting to think that you can tell yourself that you’re just playing a game, playing a role, but if there are consequences to that, then you’re sort of kidding yourself.

Bartle: Yes, you’re lying to yourself. There are consequences to playing games. That’s the whole reason people play MMOs. It gives them a freedom to try on a new identity, a new version of themselves that’s like themselves, but not quite the same. It enables them to experiment with how to behave in a new scenario. Most people will come out of playing them with a better sense of self than when they went in. The trouble is that if you’re a jerk, you’re going to come out as a better jerk than you were when you went in.

There is something called the Proteus effect, which suggests that when people are role-playing, whether in a game or in real life, it can change their real opinions.

What this shows is that when people play a character, they can be influenced by the character that they’re playing. Even things like — in ice hockey, one of the American sports, people who are wearing black uniforms are fouled more often than people who aren’t wearing black uniforms, and commit more fouls. The black uniform is saying, “We’re the bad guys.” When people are wearing some kind of a skin, they’re influenced by the skin they wear.

If you’re playing an MMO, well, you’re sort of being influenced by the character you’ve chosen to play. Now, in part that’s because you chose to play that character. You’ve chosen one that would somehow give you a return. But nevertheless, it does mean that there are some possible implications. If someone buys a game and there’s only one character and the character is not a very pleasant one, then people could be unwillingly having their opinions subtly altered by playing that character.

Should we hold ourselves back?


Above: Pete Billington and Jessica Yaffa Shamash talk about Lucy at the Virtual Beings Summit.

Image Credit: Dean Takahashi

GamesBeat: Should game developers, and technologists in general, hold back from creating sapient AI? Does that carry some great risks with it? I don’t know if that’s the risk that they might harm us, or the risk that they can be abused, depending on the context. If we can foresee that this AI is going to get better and better, should we not do this?

Bartle: What I would say is, there’s a larger question. That is, is it actually moral or ethical to create an intelligent being in the first place? Never mind what the dangers are, because if they’re intelligent, then they’ll probably develop their own sense of morality and it will probably be in line with ours, because every time any culture in the world has had to develop a morality, they basically come down to the same set of core rules, humanist types of rules. The larger question is, should we create intelligent life anyway? Ignoring anything that it could possibly do to hurt us, assuming it isn’t going to hurt us, is it ethical to create life?

We’re going to create this intelligence and we’re going to set it in an environment and it’s going to be suffering, because that’s what happens in environments. Alternatively it’s not going to be suffering, in which case it’s going to be bored. Eventually it’s going to die, or it’s going to change from what it originally was. That’s the first question we should be asking. Is it ethical to create life? Now, we create life through reproduction. But that’s not quite the same as creating an independent life, a separate life form.

As to whether we could put limits on it with some kind of Geneva Convention for AI, at the moment the technology is advancing very quickly. But the computer technology upon which it runs isn’t. The energy requirements for some of these neural networks being put out there are vast. Training these things, getting these genetic algorithms to fight each other forever, these things take a lot of computing power, and that means a lot of energy. It’s not as if everyone can do it.

Eventually, when we have unlimited energy, they will. But what happens up until that point? We’ve managed to keep the number of nuclear powers down quite low. It’s still fewer than a dozen. But we can’t stop nuclear power. If somebody wants to make it, they will. Likewise, if we created a treaty that says no superintelligent evil AIs will be created, that means some mad dictator will think, “Great, I’ll have the only one!”

In practice, all we can do is delay and slow down. But by delaying and slowing down, then we can develop other ways to think about things. We can develop moral codes. We can decide what to do when this happens. I am in favor of slowing it down. We’re not close yet to having sapient AI. Frankly, the world’s in more danger from some badly written code at a nuclear power station going haywire than it is from AI. There’s plenty of other ways that computers can destroy the world.

GamesBeat: We have some time to think about it, then.

Bartle: I have time to die before it happens, I think. [laughs]

GamesBeat: Well, they might bring you back, though. Upload a version of you to the cloud and keep it there.

Bartle: They could upload a version of me and then bring back multiple copies, so there’s a dozen of me in the world, or 200, or an army of me.

GamesBeat: Then they can see how each one behaves.

Bartle: Well, then it wouldn’t be me, would it?