The Pursuit of Perfect Play - Human Creativity versus Brute Force AI
Way back in 1956, an IBM researcher by the name of Arthur Samuel demonstrated a computer program designed to play checkers. And although it wasn't the first computer program to do so, it was by far the most advanced. On hardware larger than your fridge, yet many times weaker than whatever cellphone you carry, he had created a self-adaptive program: it improved by playing games against modified versions of itself and letting the victorious versions survive. Let me reiterate that this was done nearly sixty years ago, on a machine that used vacuum tubes for memory storage. Anyhow, the program reached a respectable level of performance, and IBM's stock climbed 15 points overnight when it was demonstrated to the public. From then on, using games as frameworks for teaching computers problem solving has been a popular approach, and computers have gotten better at it. A lot better.
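The core loop Samuel used — play the current program against a mutated copy and keep whichever wins — can be sketched in a few lines. This is only a toy illustration, not Samuel's checkers code: here a "player" is just four weights over pile-size residues in the simple take-away game "21" (remove 1–3 objects per turn; whoever takes the last one wins), and the training loop is plain hill climbing.

```python
import random

# Toy Samuel-style self-play: a policy is four weights, one per pile-size
# residue mod 4; each player makes the move that leaves the pile in its
# highest-weighted residue class.

def play(w_first, w_second, start=21):
    """Play one deterministic game of '21'; return 0 if the first player wins."""
    pile, weights, turn = start, (w_first, w_second), 0
    while True:
        w = weights[turn % 2]
        move = max(range(1, min(3, pile) + 1),
                   key=lambda m: w[(pile - m) % 4])
        pile -= move
        if pile == 0:
            return turn % 2          # this player took the last object
        turn += 1

def self_play_train(generations=300, games=40, seed=1):
    """Keep a champion weight vector; a mutated challenger replaces it
    only if the challenger wins a majority of head-to-head games."""
    rng = random.Random(seed)
    champ = [rng.random() for _ in range(4)]
    for _ in range(generations):
        challenger = [x + rng.gauss(0, 0.3) for x in champ]
        wins = 0
        for g in range(games):
            if g % 2 == 0:           # alternate who moves first
                wins += play(challenger, champ) == 0
            else:
                wins += play(champ, challenger) == 1
        if wins > games // 2:        # the victorious version survives
            champ = challenger
    return champ
```

For this game the known perfect policy is "always leave your opponent a multiple of four", which corresponds to a weight vector like `[1, 0, 0, 0]` — a first player using it cannot lose from a pile of 21.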
There is a concept in game theory called "Perfect Play", which means exactly what you think it means: performing the strategy or behavior that results in the best possible outcome for you regardless of your opponent's response. If it is possible to play perfectly from beginning to end, then that game is considered to be solved. This is exactly what happened to checkers in 2007. Researchers from the University of Alberta announced that they had improved their program "Chinook" to the point where it could not lose a game. At the time, "Chinook" had already been proven superior to humans for well over a decade, so no one really doubted its excellence. But what made this announcement so impressive was the proof behind it: if neither player ever makes a mistake, the game always ends in a draw. "Perfect Play" had been reached.
Now, I can hear that some of you are still somewhat unimpressed, "it's only checkers, big deal...". Well, let me just point out that the number of legal positions in checkers is estimated to be around 10^20. That's a one followed by twenty zeroes. And that's still just an estimate; nobody knows the exact figure, because no one has had the time to go through them all. In theory, if you had a computer that could analyze 200 million legal positions per second, it would still take that computer 15,854 years to get through them all. Coincidentally, 200 million is the number of positions per second that Deep Blue, the famous chess computer that beat grandmaster Kasparov, could calculate.
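The 15,854-year figure is easy to check with back-of-the-envelope arithmetic (it works out if you assume a 365-day year):

```python
positions = 10 ** 20          # estimated legal checkers positions
rate = 200_000_000            # positions per second, Deep Blue's pace
seconds_per_year = 365 * 24 * 3600

years = positions / rate / seconds_per_year
print(int(years))             # -> 15854
```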
So, computers are rapidly getting smarter, no surprise there. But why should esports have anything to fear from the advances in Artificial Intelligence? There are still plenty of games in which we humans dominate, some even on the home turf of the computer. Strategic digital games such as StarCraft are still mastered by humans at the top levels, especially if the human happens to play Terran (OP!). Sure, the most difficult (non-cheating) AI opponents may still be able to beat an average Joe, but they consistently fall to the might of human professionals. Our human brethren spend long hours studying the game, chasing that elusive concept of "Perfect Play". They are practicing to be able to execute a strategy so powerful that no matter what the opponent's actions are, our strategy will always triumph.
However, there is cause for concern that the race to discover that strategy will not be won by humans. In early 2009, the Brood War API (BWAPI) was released, allowing external programs easy access to in-game information. It was quickly picked up by AI researchers from all over, and a tournament was set up for the various teams to compete in. This competition was won by "The Berkeley Overmind", a zerg AI that utilized the mobility of mutalisks to great effect. Take a look at the following clip, showing how the AI individually micros every single mutalisk to repeatedly attack and fly away, minimizing the damage taken in the process:
That kind of precision and unit control is unattainable by us mere mortals, Korean or not. We may still have superior decision making, but how long will that really be true, when games as complex as checkers have already been solved? A sufficiently powerful AI could learn from the replays of our most talented players, and run countless simulations to correct all the small mistakes. Computers have already been taught to read the game manual for Civilization, reportedly resulting in an immediate 37% higher win rate. Imagine the indignity of our pro-players trying to ape a powerful strategy produced by an advanced AI, instead of having had the creativity and depth of intellect to discover it ourselves. In a sick twist of fate, the human player would then have become the robot, mindlessly performing the actions dictated by a computer-developed strategy.
But there is hope. Serious scientific research is being done on how we humans acquire expertise. More specifically, how we acquire the kind of advanced multitasking expertise required to play high-level StarCraft. Earlier research performed at the University of California using Brood War showed a strong correlation between attention distribution and winning: players who were better at distributing their attention across a large area of the map were more likely to win. Today, science is taking it one step further. Over at the SFU Cognitive Science Lab, researchers are trying to figure out exactly which skills differentiate amateurs from professionals, as well as how we acquire those skills. When we learn to play the piano, we know that practicing scales is more effective than just trying to play actual music. Imagine if we could find similar practice tools for StarCraft, a way to improve faster instead of grinding away long games on the ladder.
With a better understanding of the skills required to play top-level StarCraft, as well as how to acquire those kinds of skills quickly, we would be able to increase the average skill level of our games and accelerate our efforts in the search for a "Perfect Play" strategy. Once there, we would be unbeatable, just like Chinook... it would be a great accomplishment, even if it would only last until Blizzard releases a new balance patch.
But our own performance aside, the overarching question is: Will StarCraft ever be "solved" in the same way checkers has been solved?
The short answer is no. There are a number of reasons why "solving" StarCraft is highly unlikely. One reason is that, unlike checkers and chess, StarCraft does not have "Perfect Information", meaning that the computer does not (or at least should not) have access to all in-game information. Fog of War is a powerful thing and adds an element of uncertainty that must constantly be compensated for. In theory you can still play "perfectly" given the limited information available, but its mere presence adds significantly to the complexity of the game. Also unlike chess and checkers, StarCraft has different maps. So each map would have at least nine different Perfect Play strategies, one for each of our nine matchups (ZvZ, ZvT, etc.). And just to drive the point home, we have not yet mentioned how complexities such as Asymmetry and Unlimited Opportunity affect attempts at calculating an optimal solution for every possible encounter... so yeah, Perfect Play is far, far away.
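Where does the "nine matchups" figure come from? It is simply every ordered pairing of the three races — your race versus theirs — which a throwaway snippet can enumerate:

```python
from itertools import product

races = ("Zerg", "Terran", "Protoss")

# Every ordered pairing of your race vs. the opponent's race.
matchups = [f"{a[0]}v{b[0]}" for a, b in product(races, races)]
print(matchups)       # ['ZvZ', 'ZvT', 'ZvP', 'TvZ', ...]
print(len(matchups))  # 9 -- one Perfect Play strategy per matchup, per map
```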
For comparison, debates in chess research suggest that in order to solve chess (which is arguably less complex), we would need a breakthrough such as quantum computing before we could even properly begin the attempt. No computer as we know it, however built, would be able to compute the entire tree of possible move sequences in the game of chess. According to Wikipedia, the estimated number of possible legal positions in chess is somewhere between 10^43 and 10^50. So as long as the laws of physics hold up, a traditional computer simply cannot crunch the numbers well enough to break it down.
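To get a feel for why traditional hardware is hopeless here, take the lower bound of 10^43 positions from above and grant the machine a wildly optimistic 10^18 positions per second (that rate is my assumption, roughly exascale; everything else comes from the figures in the text):

```python
positions = 10 ** 43             # lower estimate of legal chess positions
rate = 10 ** 18                  # assumed: an exascale machine checking a
                                 # billion billion positions every second
seconds_per_year = 365 * 24 * 3600

years = positions / rate / seconds_per_year
print(f"{years:.2e}")            # ~3.17e+17 years

# For scale: the universe is roughly 1.38e10 years old, so even this
# fantasy machine would need over ten million times its age.
```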
But as the scientist who led the nearly two-decade effort to solve checkers said: “Never underestimate the advances in Technology.”
Written by Joel "Offsajdh" Hakalax
Extra Credit Reading:
- In the future, when the inevitable robot uprising occurs, science tells us that StarCraft players are well adapted to handle the required disaster management to save the day. This is due to the overlapping skill-sets required for play and survival. Other fitting positions for retired StarCraft pro-gamers would be Air Traffic Control or Military Command. And people say video games are a waste of time... pffh!
- Don't forget to show your support to the SkillCraft project. Science and StarCraft in happy-super-awesome unison: http://skillcraft.ca/
- Marion Tinsley is considered to be the greatest checkers player who ever lived. He never lost a World Championship match, and lost only seven individual games in his entire 45-year career. Two of those games were against Chinook, in a 1992 match that Dr. Tinsley actually won 4-2. Two years later he and Chinook had a rematch, but six games into the match Tinsley had to withdraw, citing health reasons. He was diagnosed with cancer the same week and died seven months later. The New York Times wrote about Chinook and Tinsley here.
- More information on the development of "The Berkeley Overmind" can be found in a wonderful article from Ars Technica here.