I’ve been developing chess engines as a hobby since high school.
At the moment I continue to maintain the last of this series of programs: an xboard-compatible chess engine which I call Avenger. It searches an average of 1.2 million positions/sec and reaches a search depth of 7 moves in Blitz games when running on a modern machine.
Avenger uses alpha-beta pruning (a modified min-max tree exploration algorithm) to enumerate the state space from a chess position and direct opponents towards their “worst future”. I will illustrate how this works with a simple example:
Assume that you’re playing a turn-based game against a friend. In the game, there are multiple bags, each containing several items of different material values. The rules of the game are simple: you select one of the bags, and your opponent then selects an item from that bag and gives it to you.
The bags and their contents are shown below.
Naturally, you are interested in getting an item with a high material value. Your opponent, however, is interested in the exact opposite: giving you one with a low material value. How do you proceed?
Some bags with high-value items such as £100 might be tempting. However, since you only select the bag and your opponent selects the item, you will never get your hands on these: your opponent will never give them to you. Instead, what you will likely get is the worst item in any bag you select.
Your best strategy, therefore, is to select the bag whose worst item has a higher value than the worst item in every other bag. In this case, this is bag 3 because it guarantees a minimum value of £7 while bags 1 and 2 guarantee minimum values of £3 and £2 respectively.
This is called a min-max strategy and is used when players take turns to direct the outcome of a zero-sum game (e.g. chess, checkers and go). In this example, the state space is small, so a computer can explore all the possible outcomes quickly. In chess, however, the average number of moves per position is 36, so a chess engine that needs to consider all the possible outcomes say, 7 moves ahead, will need to enumerate and evaluate 36^7 (roughly 78 billion) chess positions. If it can do so at a speed of 1 million positions/sec then it will take about 22 hours to make a single move.
Now here’s a neat trick to speed this up:
Assume you’re a computer playing the bag game above. As a computer, you explore bags and evaluate items serially (i.e. one by one), which may take a long time if there are many options to consider. You’ve already finished peering through the first bag and found that its worst item is worth £30. You begin exploring the second bag, and the first item you pull out is a measly £1. This situation is depicted below.
At this point, there is really no point in searching bag 2 further: you already know that it contains a very bad item that your opponent will select to give you. You therefore abandon exploring this bag, decide that you will not select it (given that you already have a better option – bag 1 with a guaranteed outcome of £30) and start exploring the next bag.
What you’ve effectively done is cut down the search tree without sacrificing evaluation accuracy. The algorithm you just used is called alpha-beta pruning.
Avenger, like all chess engines, does the same. When it encounters a bad move which the opponent might capitalize on, it declares the move a bad option and doesn’t waste time thinking about it. The saved time is invested in exploring promising moves more thoroughly, which allows the engine to search the state space to deeper levels.
On another optimization front, Avenger also represents chess boards internally with dense bitboard data structures. This enables it to generate moves and evaluate chess positions very efficiently using bitwise operations. The engine is written in C but many of the primitive bitwise board manipulation operations are implemented in x86 inline assembly to squeeze the very last bits of performance out of the underlying hardware.
During a 6-month internship at KADDB, two colleagues and I designed and constructed a 3D scanner: a device that can build a 3D computer model of a real-world object.
The scanner consists of a high precision laser depth sensor that is mounted on an assembly of linear actuators with two degrees of freedom. The sensor is swept across a 2D plane (facing the object to be scanned) and collected measurements are used to construct a depth array. The scanned object (which is placed on a high precision rotary stage) is then rotated, exposing another side of the object to the sensor and the process is repeated. The device schematics are shown below.
After scanning the object from a sufficient number of viewpoints, the depth arrays are mapped to 3D space to create a “point cloud” with a surface resolution of 9 points/mm^2. The point cloud is then filtered with third-party software and used to construct a polygon-based model using Delaunay triangulation.
Below is what the output of this process looks like …
Collecting the depth samples, inferring their 2D coordinates and mapping them into 3D space while accounting for mechanical misalignment and other sources of error was no easy task. The figure below sums it all up.
This is another point cloud which has been color-coded to show the individual 2D depth arrays which have been collected from different angular viewpoints.
The device won the “Best Design Award” at the KADDB Technology Showcase 2009.
Source Code on GitHub: