A video game may have laid the groundwork for true artificial intelligence (A.I.).
What if “bots” (computer-controlled enemies in first-person shooters) could disguise themselves as human? Would that not pass Turing’s test, which measures A.I. by virtue of imitating human behavior?
Specially designed bots at the BotPrize Unreal Tournament competition apparently did just that, convincing over half their flesh-and-blood opponents that they were, in fact, human.
The best way to pass as a human is to behave like one, and this was the design philosophy of the victorious digital warriors. In the video below, one of the human “judges,” “Miguel,” battles “Ty” (in actuality, the UT^2, one of the A.I. bots), and Ty matches Miguel step for step before fragging him.
The final leg of the contest saw four human competitors duke it out with an equal number of built-in bots and six digital entrants (bots) attempting to masquerade as human. Each combatant carried a “judging gun” for tagging opponents as human or bot, in addition to their normal complement of weaponry, and the goal, for the artificial players anyway, was to pass themselves off as human. Two of these digital warriors earned a 52 percent “human rating,” higher than that of any human contestant.
Think about that: A machine (two of them, in fact) appeared more human than actual humans.
Cue Skynet killer-robot scenarios.
Turing’s test and A.I.
While Alan Turing — circa 1950 — couldn’t have foreseen the sophisticated interactive entertainment of today, and his celebrated “test” was largely theoretical, the BotPrize winners certainly exhibited a high degree of artificial intelligence.
But did they actually pass the Turing test?
That’s their claim to fame, after all. University of Texas professor Risto Miikkulainen, who created one of the two winning bots, UT^2, along with doctoral students Jacob Schrum and Igor Karpov, described BotPrize as a “Turing test for game bots.” But does it clear this crucial A.I. threshold?
To answer that, it’s important to understand what Turing — and by extension, the BotPrize competition — was trying to accomplish. The goal wasn’t to defeat humans in a game of skill. Leave that to Deep Blue and Ninja Gaiden Black. Rather, the idea was to blend in and become indistinguishable from humans — which essentially means dumbing it down.
Sixty-two years ago, computer scientist Alan Turing devised a challenge for measuring artificial intelligence. His test was a variation on the “Imitation Game,” in which an interrogator (of either sex) must identify the sex of two unseen players, a man and a woman, through a line of questioning. Player A’s job is to trick the interrogator into making the wrong identification, while player B’s is to assist the interrogator.
Turing’s version substituted a computer for player A, while player B could be either sex. At the time, he made the following bold prediction:
“I believe that in about 50 years’ time, it will be possible to program computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
He also claimed that, by the year 2000, we could speak of “thinking machines” without anyone contradicting us. We’re about 12 years late, but we may have finally achieved Turing’s goal. And it’s all happened fairly recently.
Indeed, Miikkulainen noted that “just a few years ago, the kind of computing power for BotPrize wasn’t possible.”
Designing a bot to fool human players
The definition of machine “intelligence” varies wildly, but in many ways — BotPrize included — we’ve already achieved that. If the goal is to exhibit human-like tendencies and fool the interrogator (or player, as it were) into accepting an artificial construct as real, then the UT^2 passed the Turing test.
To gain some perspective, I interviewed Miikkulainen, and our discussion yielded fascinating insights on the future of game development and humanity in general.
Over the course of five years — the UT^2’s incubation period — Risto Miikkulainen and his team learned a great deal about human nature. Above all else, people are stubborn, overly persistent, and irrational. They’re inefficient. As Risto mentioned, “Humans aren’t fully optimized.”
When playing first-person shooters, people make illogical decisions for a number of biased, subjective reasons (and then teabag each other on Xbox Live).
We pursue vendettas. We become attached to treasured instruments of destruction (even when they’re inadequate to the task at hand). And we make mistakes. Lots of them.
So the team needed to train the UT^2’s neural networks to reflect these imperfections. They did this through a two-pronged approach — by integrating previously observed human behavior and an “evolutionary reinforcement learning method” called “neuroevolution.”
Through a “survival of the fittest” vetting process, the team “evolved” a mature bot that could pass as a human. This wasn’t your ordinary, everyday artificial intelligence.
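The “survival of the fittest” loop described above can be sketched in a few lines. This is a toy illustration, not the actual UT^2 code: the feature names, the human action trace, and the tiny linear controller are all invented for the example, but the structure — score a population against recorded human behavior, keep the best, mutate to refill — is the essence of neuroevolution.

```python
import random

random.seed(42)

N_INPUTS = 3       # hypothetical features: distance to enemy, own health, ammo
POP_SIZE = 20
GENERATIONS = 30

# Hypothetical recorded human decisions: inputs -> chosen action (0 = flee, 1 = fight)
human_trace = [([0.9, 0.2, 0.8], 0), ([0.1, 0.9, 0.7], 1),
               ([0.5, 0.5, 0.5], 1), ([0.8, 0.1, 0.2], 0)]

def act(weights, inputs):
    """A single linear unit with a threshold: the controller's 'decision'."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def fitness(weights):
    """How often the controller matches the recorded human trace."""
    return sum(act(weights, x) == y for x, y in human_trace)

def mutate(weights, sigma=0.3):
    """Perturb a parent's weights to produce offspring."""
    return [w + random.gauss(0, sigma) for w in weights]

# Random initial population of weight vectors ("genomes").
population = [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # survival of the fittest
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(f"best match with human trace: {fitness(best)}/{len(human_trace)}")
```

The key point the sketch preserves is that nothing here is hand-programmed behavior: the controller's weights are discovered by selection pressure toward human-like output.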
Miikkulainen was quick to remind me that you don’t program neural networks. Rather, you train them through learning mechanisms. And one of the stereotypical robotic behaviors — if we can indeed refer to robots as behaving a certain way — is cold, calculating precision. The team applied certain “resource restrictions” so the A.I. would appear more human.
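What such “resource restrictions” might look like can be sketched simply — deliberately imperfect aim and a non-zero reaction time. The specific numbers and function names below are illustrative assumptions, not UT^2's actual limits, which the article does not specify.

```python
import random

random.seed(0)

def human_like_aim(true_angle_deg, skill=0.7):
    """Return a noisy aim angle: pixel-perfect aim would flag the player as a bot."""
    error = random.gauss(0, (1.0 - skill) * 10.0)  # degrees of aim error
    return true_angle_deg + error

def human_like_reaction(base_ms=200.0, jitter_ms=80.0):
    """Simulated reaction time in milliseconds: humans cannot respond instantly."""
    return max(100.0, random.gauss(base_ms, jitter_ms))

angle = human_like_aim(45.0)
delay = human_like_reaction()
print(f"aim {angle:.1f} deg after {delay:.0f} ms")
```

Capping precision and speed this way trades raw performance for believability, which is exactly the “dumbing it down” the competition rewarded.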
When they began this project five years ago, the best bots scored a 20 percent to 25 percent human rating (with actual humans scoring 10 to 15 percentage points higher). At the BotPrize competition, UT^2 and another combatant, Mirrorbot, achieved a 52 percent human rating.
Considering the bland patterns and on-rails chicanery we tend to associate with bots, this is a huge accomplishment.
To my own dismay — and the broken hearts of sci-fi geeks worldwide — UT^2 isn’t a “learning computer.” It can’t recognize new patterns, improvise, or formulate new strategies. And it doesn’t have a will of its own (though this is probably a good thing).
In other words, UT^2 doesn’t “learn” during its performance. Rather, it “evolves” ahead of time. According to Miikkulainen, “the neural network is not infinitely adaptable,” and it takes some time to change course. This was by design.
Miikkulainen stressed throughout the interview that human players like to observe behavior that mirrors their own (this was the namesake of Mirrorbot, in fact). So UT^2 employs a unique “strategy”: Instead of seeking the optimal approach to each scenario — which would flag it as a robot — the A.I. mimics the actions of its opponent.
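The mirroring idea can be reduced to a small sketch: rather than computing an optimal move, the bot echoes its opponent's recent actions after a short lag. The class, action names, and delay parameter below are hypothetical; the real UT^2 and Mirrorbot logic is far more involved.

```python
from collections import deque

class MirroringBot:
    def __init__(self, delay=3, fallback="patrol"):
        self.memory = deque(maxlen=32)  # recently observed opponent actions
        self.delay = delay              # replay lag, measured in observations
        self.fallback = fallback        # behavior before enough has been seen

    def observe(self, opponent_action):
        self.memory.append(opponent_action)

    def next_action(self):
        # Echo what the opponent did `delay` observations ago, if known.
        if len(self.memory) >= self.delay:
            return self.memory[-self.delay]
        return self.fallback

bot = MirroringBot(delay=2)
for action in ["strafe_left", "jump", "fire"]:
    bot.observe(action)
print(bot.next_action())  # echoes the opponent's second-most-recent action: "jump"
```

The lag matters: an instant echo would look robotic, while a delayed one reads as a human player reacting to, and imitating, what they just saw.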
Not coincidentally, the learning mechanism mimicked human limitations, which enabled the A.I. to disguise itself as a human. And it accomplished that singular goal admirably. As a gamer, technologist, and bona fide geek, I’m salivating for all the potential applications.
I’ll settle for escort missions — with smarter A.I. — that don’t give me homicidal thoughts.
UT^2 isn’t the Terminator or even a benign fictional android like Star Trek’s Data. But it’s a start. As Miikkulainen pointed out, gaming environments are a good stepping stone. In masquerading — successfully — as a human, UT^2 honored the spirit of Turing’s test and took an important first step in the development of sophisticated artificial intelligence.
The UT^2 software package (including the source code) is available for download at http://nn.cs.utexas.edu/?ut2.