I finally finished the grueling task of creating a user interface for checkers, which was both mind-numbingly boring and thoroughly challenging. I am continuously reminded that any visual output in Java is a big hack, as Greg would say, but at least restricting myself to the Java libraries does allow me to experiment with how such things work.
Two weeks ago I wrote about Minimax, a simple algorithm which searches future board positions to pick the move that minimizes the opponent's good options while maximizing one's own. I claimed that my algorithm had produced a proficient checkers player, but I was unable to really test it until I had completed the user interface. Now I have, and you can check it out here.
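For reference, the core of the search looks roughly like this. Board, Move, and the methods on them are illustrative stand-ins, not my actual classes:

```java
import java.util.List;

// Illustrative interfaces only; my real classes look different.
interface Move {}

interface Board {
    List<Move> legalMoves(); // legal moves for the side to play
    Board apply(Move m);     // the position after making a move
    int evaluate();          // static score from the maximizing side's view
}

class Minimax {
    // Search `depth` plies ahead, alternating between our moves and the
    // opponent's, and return the best achievable evaluation.
    static int minimax(Board b, int depth, boolean maximizing) {
        if (depth == 0 || b.legalMoves().isEmpty()) {
            return b.evaluate();
        }
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Move m : b.legalMoves()) {
            int score = minimax(b.apply(m), depth - 1, !maximizing);
            best = maximizing ? Math.max(best, score) : Math.min(best, score);
        }
        return best;
    }
}
```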
Unfortunately, I found that minimax, as I have implemented it, performs rather poorly, most likely because of the simplistic nature of the board evaluation function (it judges how good a position is) and because my implementation assigns the same evaluation to many different board states.
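To make the problem concrete, the evaluation amounts to little more than counting material, something like the sketch below (the exact piece values are illustrative):

```java
class MaterialEval {
    static final int MAN = 1, KING = 2; // kings weighted higher; values assumed

    // Material-only evaluation: any two boards with the same piece counts
    // get the same score, no matter how the pieces are arranged.
    static int evaluate(int myMen, int myKings, int oppMen, int oppKings) {
        return (myMen * MAN + myKings * KING)
             - (oppMen * MAN + oppKings * KING);
    }
}
```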
Consider the following two board positions. I admit that I am not a great checkers player, but in my opinion the one on the left is fairly even, whereas the one on the right has black within an inch of getting a king, with several of white's pieces about to be destroyed. However, in my current approach the two positions are evaluated as equal, since the pieces are all that count.
Chinook, the famous checkers program which cannot lose, uses several factors to evaluate a board position. Among them are, according to Computational Intelligence in Games, an interesting book I recently checked out, “(1) piece count, (2) kings count, (3) trapped kings, (4) turn, (5) runaway checkers (unimpeded path to king); and other minor factors.” Other approaches might work as well, such as assigning pieces at the board's edges a higher value than those in the center, or looking to reduce the number of possible moves available to the opponent.
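A hand-crafted function along those lines would be a weighted sum of such features. The weights and feature methods below are invented for illustration and are not Chinook's actual values:

```java
enum Side { BLACK, WHITE }

// Hypothetical feature extractor; a real one would scan the board.
interface Features {
    int pieces(Side s);
    int kings(Side s);
    int trappedKings(Side s);
    int runaways(Side s); // checkers with an unimpeded path to king
    Side toMove();
}

class ChinookStyleEval {
    // Weighted sum of the quoted factors, scored from black's perspective.
    static int evaluate(Features f) {
        return 100 * (f.pieces(Side.BLACK) - f.pieces(Side.WHITE))
             + 160 * (f.kings(Side.BLACK) - f.kings(Side.WHITE))
             -  40 * (f.trappedKings(Side.BLACK) - f.trappedKings(Side.WHITE))
             +  50 * (f.runaways(Side.BLACK) - f.runaways(Side.WHITE))
             +  10 * (f.toMove() == Side.BLACK ? 1 : -1);
    }
}
```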
I tried the following approach, based on this article: weighting each piece's value by the square it occupies, using a grid of positional weights.
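In code, that kind of weighting looks roughly like this. The numbers in WEIGHTS are placeholders rather than the grid I actually used; here they simply favor the edges over the center, as an example:

```java
class PositionalEval {
    // Placeholder weight grid for the 8x8 board; my actual grid (from
    // the article linked above) is not reproduced here.
    static final int[][] WEIGHTS = {
        {4, 4, 4, 4, 4, 4, 4, 4},
        {4, 3, 3, 3, 3, 3, 3, 4},
        {4, 3, 2, 2, 2, 2, 3, 4},
        {4, 3, 2, 1, 1, 2, 3, 4},
        {4, 3, 2, 1, 1, 2, 3, 4},
        {4, 3, 2, 2, 2, 2, 3, 4},
        {4, 3, 3, 3, 3, 3, 3, 4},
        {4, 4, 4, 4, 4, 4, 4, 4},
    };

    // board[row][col]: positive values for my pieces, negative for the
    // opponent's, zero for empty squares.
    static int evaluate(int[][] board) {
        int score = 0;
        for (int row = 0; row < 8; row++) {
            for (int col = 0; col < 8; col++) {
                score += board[row][col] * WEIGHTS[row][col];
            }
        }
        return score;
    }
}
```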
This improved my checkers AI, but there is still a lot of improvement to be had. For some reason, once the AI gets a king it likes to jump it back and forth infinitely, getting nothing done when it could be advancing another piece. I have no idea why it does this, and am trying to figure it out. If anyone has any observations, feel free to post them.
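One guess of my own, which I have not verified: if several lines of play lead to identical scores, plain minimax has no reason to prefer the one that makes progress now, so shuffling a king back and forth looks just as good as anything else. A small discount on deeper results would at least break those ties. Sketched below, reusing the illustrative Board and Move interfaces from earlier:

```java
class MinimaxWithDiscount {
    // Same recursion as before, but scores returned from deeper in the
    // tree are shrunk slightly toward zero, so an outcome reached sooner
    // beats the same outcome reached later. Untested idea.
    static int search(Board b, int depth, boolean maximizing) {
        if (depth == 0 || b.legalMoves().isEmpty()) {
            return b.evaluate();
        }
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Move m : b.legalMoves()) {
            int score = search(b.apply(m), depth - 1, !maximizing);
            score = (score * 99) / 100; // discount deeper results
            best = maximizing ? Math.max(best, score) : Math.min(best, score);
        }
        return best;
    }
}
```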
However, my next undertaking will not be to follow in the footsteps of Chinook, with its human-crafted board evaluation function. Instead I will take the path of Anaconda, the program described in Computational Intelligence in Games, which uses a genetic algorithm and neural nets to evolve an expert-level checkers AI out of essentially nothing.
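For a rough idea of what that entails, the evolutionary loop is something like the following. NeuralNet and every detail here (selection scheme, population handling) are placeholders, not Anaconda's actual design:

```java
import java.util.*;

class EvolutionSketch {
    // A population of neural-net evaluators plays games against each
    // other, the top half survive, and mutated copies fill the rest.
    interface NeuralNet {
        NeuralNet mutatedCopy(Random rng); // offspring with jittered weights
        double score();                    // fitness from this generation's games
    }

    static List<NeuralNet> nextGeneration(List<NeuralNet> pop, Random rng) {
        // Sort best-first by fitness (sorts the input list in place).
        pop.sort(Comparator.comparingDouble(NeuralNet::score).reversed());
        List<NeuralNet> survivors = pop.subList(0, pop.size() / 2);
        List<NeuralNet> next = new ArrayList<>(survivors);
        for (NeuralNet parent : survivors) {
            next.add(parent.mutatedCopy(rng));
        }
        return next;
    }
}
```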