One of the grand applications of computers is as adversaries in board games. Deep Blue, the legendary chess computer, beat Garry Kasparov in 1997. Chinook, a computer from the University of Alberta, plays checkers so well that it cannot lose. WOPR, the AI from War Games, is trained on all manner of board games before being set to analyze nuclear war scenarios.

I was struck by the idea of a computer “knowing” how to win a game, and set out to write my own checkers AI. “Knowing,” of course, is the wrong word. A computer does not know things. Instead, it builds a logical formula and sets about crunching out its solution.

Before we begin looking at the algorithm itself, consider a game of checkers as an abstract collection of board positions, each with a score. In positions where black has the advantage, the score is high for black and low for red; when red has the advantage, the opposite. How are these scores determined? In a perfect world, every possible board position could be analyzed and connected into a giant tree, each possible move acting as a link to another board position. Positions that lead to a win for black, with no chance of red winning, are good for black. The opposite is true for red.

However, in real life we can't do this, due to limits on processing power and storage. According to Wikipedia, American checkers has roughly 10^20 positions, far too many for my computer to store or analyze. Instead, programmers use some formula to estimate a board's value. This is called a heuristic. For my heuristic I chose the difference in the number of pieces, kings counting as two. If black has more pieces, I assume black is winning. At least that is the idea. More complicated heuristics consider the positions of pieces on the board, with pieces in corners or on the sides (where they cannot be captured) contributing more to a board's score, or do other, fancier analyses. I stuck to something short and simple.
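The piece-difference heuristic fits in a few lines. Here is a minimal sketch in Java; the `Counts` record is a hypothetical stand-in for whatever board representation you use, not the actual class from my code.

```java
// Hypothetical summary of a board: how many men and kings each side has.
record Counts(int blackMen, int blackKings, int redMen, int redKings) {}

final class Heuristic {
    // Men count as 1, kings as 2; a positive score means black is ahead.
    static int evaluate(Counts c) {
        return (c.blackMen() + 2 * c.blackKings())
             - (c.redMen()   + 2 * c.redKings());
    }
}
```

On the opening board (12 men each, no kings) this returns 0, which is why a one-move lookahead alone tells you nothing.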

So now, given any board position, we can come up with a value.

The minimax algorithm lets us use these values to pick the best of the available moves by looking into the future.

Consider this: at the start of the game, the first player has 7 possible moves (each advancing one of the pieces in the front row by one square). We can look at the value of each resulting board position and pick the best one.

This, however, is pretty lame, since every move leads to a board with the same value (at least with my heuristic), and doesn't get you very far. Instead, we continue into the future of possible moves and find that for each move, player 2 has 7 possible responses. That makes 49 possible board configurations two moves into the future. Now, since player 2 is responding, we assume they are intelligent and will pick the response that is best for them. Therefore, if we stopped here, instead of picking the board position that maximizes the score, we pick the one that minimizes it (hence the mini-max name).

Now that we know what they are going to do, we can pick the move that leaves them with the worst possible set of responses.

What the minimax algorithm does is look a certain number of turns (the depth) into the future and, by alternately minimizing and maximizing, find the one move that screws the other player over the most while simultaneously helping you the most.
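The alternating minimize/maximize recursion can be sketched as follows. This is plain minimax over a hypothetical `GameState` interface of my own naming (it is not tied to checkers, or to my actual code); `score()` is the heuristic from earlier, with high values favoring the maximizing player.

```java
import java.util.List;

// Hypothetical game abstraction: any position that can list its successors
// and report its heuristic value will do.
interface GameState {
    List<GameState> successors(); // all positions reachable in one move
    int score();                  // heuristic value; high = good for maximizer
}

final class Minimax {
    // Value of `state` when searched `depth` plies ahead, with the players
    // alternately maximizing and minimizing the score.
    static int minimax(GameState state, int depth, boolean maximizing) {
        List<GameState> moves = state.successors();
        if (depth == 0 || moves.isEmpty()) {
            return state.score(); // leaf: fall back on the heuristic
        }
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (GameState next : moves) {
            int value = minimax(next, depth - 1, !maximizing);
            best = maximizing ? Math.max(best, value) : Math.min(best, value);
        }
        return best;
    }

    // Picks the successor with the highest minimax value for the side to move
    // (assumed here to be the maximizing player).
    static GameState bestMove(GameState state, int depth) {
        GameState best = null;
        int bestValue = Integer.MIN_VALUE;
        for (GameState next : state.successors()) {
            int value = minimax(next, depth - 1, false);
            if (best == null || value > bestValue) {
                bestValue = value;
                best = next;
            }
        }
        return best;
    }
}
```

The `maximizing` flag flips on every recursive call, which is exactly the "they pick what's best for them, we pick what's best for us" reasoning above.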

While this approach is simple, it has its setbacks. First, any human can tell you that most of the analyzed moves are wasted CPU time, because the majority of those configurations are dumb board positions that no one would ever play through. Additionally, if the other player has about 7 responses to every move, the number of possible ending positions is 7^d, where d is the depth of the search. Looking only, say, 6 moves into the future results in 117,649 board positions! (My computer runs out of Java heap space if I check more than 8 moves into the future, about 5,760,000 board positions.) A human can discard all but the most likely lines in seconds, whereas a computer blindly checks them all.
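The blowup is easy to check yourself. Assuming a constant branching factor, the leaf count is just repeated multiplication:

```java
final class Blowup {
    // Number of leaf positions in a game tree with a fixed branching
    // factor searched to the given depth: branching^depth.
    static long positions(int branching, int depth) {
        long n = 1;
        for (int i = 0; i < depth; i++) {
            n *= branching;
        }
        return n;
    }
}
```

With a branching factor of 7, depth 6 gives 117,649 positions and depth 8 gives 5,764,801, which is where my heap gives out. Real branching factors in checkers vary by position (forced jumps shrink them, kings grow them), so this is only a ballpark.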

It is important to remember that a computer is simply a mathematical construct, but if you steer one in the right direction, it manages to cast a glimmer of brilliance.

This document is the output of a game with each side looking 8 moves into the future. It ends in a draw, with each side oscillating its kings forever. r = player 1, b = player 2; capital letters signify kings. Eventually I will get around to making a nice graphical interface where you can play against it. As always, the code is online. You can find it here.