Google and Blizzard are opening up StarCraft II to anyone who wants to teach artificial intelligence systems how to conduct warfare, because apparently I’m the only one who has ever seen The Terminator.
Researchers can now use Google’s DeepMind A.I. to test various theories for ways that machines can learn to make sense of complicated systems, in this case Blizzard’s beloved real-time strategy game. In StarCraft II, players fight one another by gathering resources to build defensive and offensive units. The game has a healthy competitive community known for a ludicrously high skill level. But considering that DeepMind A.I. has previously conquered the complicated turn-based game of go, a real-time strategy game makes sense as the next frontier.
The companies announced the collaboration today at the BlizzCon fan event in Anaheim, California, and Google’s DeepMind A.I. division posted a blog about the partnership and why StarCraft II is so ideal for machine-learning research.
“StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world,” reads Google’s blog. “The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks.”
Most notably, StarCraft requires players to send out scouts to learn information. To succeed, the player then needs to retain and act on that information over a long period of time with ever-changing variables.
“This makes for an even more complex challenge as the environment becomes partially observable,” Google’s blog explains. “[That’s] an interesting contrast to perfect information games such as chess or go. And this is a real-time strategy game where both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.”
If you’re wondering how much humans will have to teach A.I. about how to play and win at StarCraft, the answer is very little. DeepMind learned to beat the best go players in the world by teaching itself through trial and error. All the researchers have to do is define what counts as success, and the A.I. can then play games against itself on a loop, reinforcing any strategies that lead to more of it.
For StarCraft, that will likely mean asking the A.I. to prioritize how long it survives and/or how much damage it does to the enemy’s primary base. Or maybe researchers will find that defining success in a more abstract way leads to better results. Discovering the answers to questions like these is the entire point of Google and Blizzard teaming up.
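To make that idea concrete, here is a minimal toy sketch of what “defining success” looks like in code: a reward function that weighs survival time against damage dealt, which a self-play loop would then try to maximize. Everything here is hypothetical for illustration; the article does not describe DeepMind’s actual reward design, and these names and weights are invented.

```python
# Toy illustration of defining "success" as a reward signal for a
# self-taught agent. All names and weights are hypothetical; this is
# not DeepMind's actual StarCraft II setup.

def reward(survival_ticks: int, damage_to_enemy_base: float,
           survival_weight: float = 0.01, damage_weight: float = 1.0) -> float:
    """Score one finished game: reward surviving longer and dealing damage."""
    return (survival_weight * survival_ticks
            + damage_weight * damage_to_enemy_base)

# Two example games played by two candidate strategies:
episodes = [
    {"survival_ticks": 500, "damage": 0.0},   # cautious: survives, no damage
    {"survival_ticks": 300, "damage": 40.0},  # aggressive: dies earlier
]

scores = [reward(e["survival_ticks"], e["damage"]) for e in episodes]
best = max(range(len(episodes)), key=lambda i: scores[i])
# With these weights the aggressive strategy wins, 43.0 to 5.0, so a
# learning loop would reinforce it — change the weights and the
# preferred playstyle changes with them.
```

The point of the sketch is the last comment: the researchers never tell the A.I. *how* to play, only how games are scored, and the choice of scoring function quietly shapes which strategies the system ends up preferring.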
And, of course, once we’re dealing with the fallout of the A.I. realizing that its best strategy for winning is to take over every computer on the internet to use as a massive, worldwide cloud-based brain, just steer clear of my completely analog bunker out in the Rocky Mountains.