Today I’m releasing Maverick 0.60. The main changes are as follows:
- Added support for Chess960
- Added basic king safety (this makes the playing style much more attractive)
- Fixed a problem with using the opening book with Arena
- Fixed an obscure bug which could crash the engine after a stop command
- Transferred source code to Github (https://github.com/stevemaughan/maverick.git)
- Made a bazillion tweaks which may, or may not, help!
In self-play matches against 0.51 it shows an improvement of about 50 ELO. I’m hoping this will translate to a real-world strength increase of at least 30 ELO.
I’m now about to start working on improving the selective search!
You can download it from the download page.
Last week Microsoft released a Community Edition of Visual Studio 2013. This is a free version of the Professional edition of Visual Studio 2013. Previously Microsoft’s free edition was Visual Studio Express, which only compiled to 32-bit and didn’t include a profiler or Profile-Guided Optimization. The new Community Edition includes all of these goodies and can generate 64-bit executables. This is a big deal for chess programmers. It means you can easily develop and test your engine from within the same high-quality development environment (I know we’ve always had GCC, but this is more integrated than the mishmash of open source tools).
There are some constraints on who can use the Community Edition. You can only use it if your engine isn’t commercial, or if it brings in less than $1 million per year – I’m sure that covers every chess engine developer!!
You can find out more and download here.
The Holiday season is coming. I’m hoping to have more time to dedicate to chess programming. For the last six months I’ve been swamped with work – but I think there is a light at the end of the tunnel. More updates to come.
I’m excited to let everyone know about two new engines which are to be hosted on this blog.
The first engine is Fruit Reloaded. This is a fork of Fabien Letouzey’s Fruit 2.1. Most of the new development (including SMP search) has been done by Daniel Mehrmann and Ryan Benitez. You can find out more here:
The second engine is a big surprise. Fabien himself has been dabbling once again in chess programming. He’s come up with a brand new engine – Senpai! It’s a bitboard engine (Fruit was array-based) with SMP search. I ran some quick tests on a beta version and Senpai 1.0 drew a 150-game match against Chiron 2.0. Although this is a small sample against only one engine, it implies a rating of approximately 3100 ELO on the CCRL scale. You can find out more about Senpai here:
Let the fun begin!
When it comes to Maverick’s evaluation function I’m frustrated and excited in equal measure!
My focus over the last couple of months has been to improve Maverick’s evaluation function. However, I’ve found it quite difficult to improve on Maverick 0.51’s actual playing strength. Adding extra knowledge (e.g. backward pawns) seems to add little in terms of playing strength. I’m struck by the massive number of ways to encode chess knowledge. I feel like a blind archer, making wild guesses as I try to calibrate the parameters.
There must be a better way!
Early on in Maverick’s development I came across Thomas Petzke’s approach to tuning an evaluation function. He uses a form of genetic algorithm (PBIL) to tune the parameters. PBIL optimization algorithms are really neat – they represent each number in binary format. Each bit of each number is represented as a floating point value between zero and one. As the system “learns”, these floating point values are “trained” and gravitate to either zero or one based on the training data. In Thomas’ case he played a few games of chess to assess the fitness of each set of parameters. This is expensive – but ultimately game-playing ability is the attribute we’d like to optimize, so maybe the training time is justified.
Back in 2000 I worked a lot with standard genetic algorithms. I used them to evaluate marketing campaigns. I think PBIL may be even better for evaluating marketing campaigns (but that’s a story for another day). I’m certainly interested in using them to tune Maverick’s evaluation function. The only problem is that Thomas’ method takes ages to complete (weeks!). I’d prefer a method which is quicker.
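To make the idea concrete, here’s a minimal PBIL sketch in Python. Everything below is illustrative: the bit length, population size, learning rate and the toy fitness function are placeholder choices of mine – in Thomas’ setup the fitness would come from actual test games, not from comparing against a known answer.

```python
import random

def pbil(fitness, n_bits, pop_size=50, generations=200,
         learn_rate=0.1, seed=1):
    """Population-Based Incremental Learning: each bit is modelled
    by an independent probability that drifts toward the bit
    values of the best sampled individual."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits                       # start undecided
    for _ in range(generations):
        population = [[1 if rng.random() < p else 0 for p in probs]
                      for _ in range(pop_size)]
        best = max(population, key=fitness)
        # Nudge each probability toward the best individual's bits.
        probs = [(1 - learn_rate) * p + learn_rate * b
                 for p, b in zip(probs, best)]
    return [1 if p >= 0.5 else 0 for p in probs]

# Toy fitness: reward agreement with a known-good bit pattern
# (0b10010001 = 145 centipawns, say). In the real setup the
# fitness would be the engine's score over a few games.
TARGET = [1, 0, 0, 1, 0, 0, 0, 1]
best_bits = pbil(lambda bits: sum(b == t for b, t in zip(bits, TARGET)),
                 n_bits=len(TARGET))
value = int("".join(map(str, best_bits)), 2)
print(value)
```

Note how each floating point probability starts at 0.5 (“undecided”) and is pulled a little toward the winning bit pattern every generation – exactly the “gravitate to zero or one” behaviour described above.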
Then I came across a post on CCC by Peter Österlund:
How Do You Automatically Tune Your Evaluation Tables
Peter outlines a new way to tune an evaluation function. His approach takes 2.5 million chess positions and minimizes the following fitness function:
E = 1/N * sum(i=1..N, (result[i] - Sigmoid(QuiescentScore(pos[i])))^2)
This is really quite interesting for the following reasons:
- Since we’re not playing complete chess games this runs *much* faster – maybe less than one day of computing time
- The sigmoid function is *really* sensitive in the -100 to +100 centipawn range. This is a critical region where virtually all games are decided. If we can create an evaluation function which accurately evaluates this range then we’re likely to have a strong chess engine
- I suspect Houdini uses a similar technique to calibrate its evaluation function, since its evaluation is linked to the probability of winning. Robert Houdart mentions this on his website: “Houdini 4 uses calibrated evaluations in which engine scores correlate directly with the win expectancy in the position. A +1.00 pawn advantage gives a 80% chance of winning the game against an equal opponent at blitz time control. At +2.00 the engine will win 95% of the time, and at +3.00 about 99% of the time. If the advantage is +0.50, expect to win nearly 50% of the time”
- Some people who have tested this approach have reported good results
- When the pawn evaluation parameters (resulting from the PBIL optimization) were tested, they varied considerably between the middlegame and endgame. The middlegame value of a pawn came out at 50 centipawns, while the endgame value was 145 centipawns. If these values are robust and used in normal play, they are likely to produce exciting chess where the engine is happy to sacrifice a pawn for a modest positional advantage. This sounds like the recipe for an interesting engine – which is one of the goals for Maverick!
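Here’s a small Python sketch of that fitness function. The logistic shape of the sigmoid and the constant k are assumptions on my part (the exact form is a tuning choice), and the training data is made up; in the real setup each score would come from a quiescence search of the training position.

```python
import math  # not strictly needed here, kept for clarity

def sigmoid(score_cp, k=1.0):
    """Map a centipawn score to an expected result in [0, 1].
    Assumed logistic form: k controls the slope."""
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

def fitness(positions):
    """E = 1/N * sum((result - sigmoid(score))^2). Each training
    item is a (quiescence_score_cp, game_result) pair."""
    return sum((result - sigmoid(score)) ** 2
               for score, result in positions) / len(positions)

# Tiny illustrative training set: (score in centipawns, result)
# with 1.0 = win, 0.5 = draw, 0.0 = loss.
training = [(+150, 1.0), (+20, 0.5), (-10, 0.5), (-200, 0.0)]
print(fitness(training))

# The sigmoid is steepest around equality: a 50 cp swing near 0
# moves the expected result far more than the same swing at +500.
print(sigmoid(50) - sigmoid(0))
print(sigmoid(550) - sigmoid(500))
```

The last two lines show why the -100 to +100 range dominates the error term: mis-evaluating a roughly level position costs far more squared error than mis-evaluating an already winning one.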
So I’m in the process of rewriting Maverick’s evaluation code to accommodate the PBIL algorithm. I’m also writing a PGN parser (in Delphi) so I can train the evaluation function using different sets of training positions.
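The training-data side of that parser boils down to something very simple: pull the game result out of each PGN header and turn it into the 0 / 0.5 / 1 label the fitness function expects. Here’s a stdlib-only Python sketch of that step (the actual parser is in Delphi and also extracts the positions themselves):

```python
import re

# Map the standard PGN Result tag values to training labels.
RESULT_LABELS = {"1-0": 1.0, "1/2-1/2": 0.5, "0-1": 0.0}
RESULT_TAG = re.compile(r'\[Result "([^"]+)"\]')

def game_labels(pgn_text):
    """Return one training label per game, skipping unfinished
    games (PGN marks those with Result "*")."""
    labels = []
    for match in RESULT_TAG.finditer(pgn_text):
        result = match.group(1)
        if result in RESULT_LABELS:
            labels.append(RESULT_LABELS[result])
    return labels

sample = '[Result "1-0"]\n1. e4 e5 ...\n\n[Result "1/2-1/2"]\n1. d4 ...\n'
print(game_labels(sample))  # [1.0, 0.5]
```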
With all of this re-writing it may be a little while before I can release a new version of Maverick. My aim is still to take Maverick to 2500 ELO using only an improved evaluation function!
I’ll keep you posted!
Over the past month Graham Banks has been running the Division 7 competition. I was delighted when Maverick managed to win with a score of 27.5 out of 44 games. After nine rounds Maverick languished in the bottom half of the table, but it managed to fight back and win! During the tournament I logged onto Graham’s site quite a few times and it was nice to chat with Graham and Erik. There were many nail-biting games – not good for the blood pressure!
Graham then ran a gauntlet competition for Maverick to get enough games for a rating. It managed a respectable rating of 2317 ELO on the CCRL scale. You can see the details here:
Maverick’s CCRL Rating
As I mentioned in a previous post, Maverick doesn’t do so well at slow time controls, so I was happy it came out above 2300 ELO on CCRL.
Many thanks to Graham for taking the time to test Maverick!