
New Engine – The Baron

I’m delighted to host another new engine at ChessProgramming.net. The Baron recently played at the WCCC in Leiden, where it was operated by Richard’s delightful daughter Tessa. I’m not sure of The Baron’s exact playing strength, but it’s certainly strong. It’s a full-featured engine with SMP support and a comprehensive evaluation function. This version dates back to February 2012. You can find out more on The Baron’s own page.

Enjoy!


Python Chess

I’m delighted to give you this guest post by Niklas Fiekas, the creator of Python Chess. You may think Python Chess is just another chess engine. It isn’t. It’s a library of routines which can manipulate and analyze chess data using Python. After I learnt about Python Chess I immediately went to Codecademy and took their course on Python. I really see this set of tools becoming a key part of any testing or development framework.

Over to Niklas…

Python-chess by example: Automating the Bratko-Kopec Test

Decades ago (1982) Kopec and Bratko published a systematic test for assessing the strength of chess-playing programs (pdf). Despite its age it can still be fairly challenging, even for modern chess engines. Of course 24 test positions are not going to be enough for a definitive result, especially given that most positions are of a specific type: fairly closed, with solutions that often involve some kind of sacrifice.

For each of the 24 positions a unique best move is known. The engine is given 120 seconds to solve each position, and 1 point is awarded for finding the correct best move.

The positions are given as EPDs. In this article we are going to automate the process of testing a UCI engine, introducing python-chess along the way.

 

Why Python (and python-chess)?

When it comes to chess programming, C++ is often the language of choice because performance is critical. Not necessarily so in this case: all the intensive calculation will be done by the engine. We just want to convert the given EPD lines, send them to the engine and handle its response. Python is a very good tool for high-level work like this (or making a mini chess computer, or making a website to probe tablebases, or creating a cross-platform chess GUI). We will use python-chess to deal with the chess rules and the formats involved: EPDs, FENs and the UCI protocol. python-chess can also read and write PGNs, read Polyglot opening books and probe Syzygy tablebases.

Chess positions

FEN: 1k1r4/pp1b1R2/3q2pp/4p3/2B5/4Q3/PPP2B2/2K5 b - - 0 1

BK.01 is one of the easier positions: Black to move and mate in 3

In python-chess a position is represented by a chess.Bitboard(). To create a position from a FEN:
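For example (a minimal sketch; the class introduced above as chess.Bitboard is named chess.Board in current python-chess releases, which is what the snippets below use):

```python
import chess

# BK.01: set the position up directly from its FEN string.
board = chess.Board("1k1r4/pp1b1R2/3q2pp/4p3/2B5/4Q3/PPP2B2/2K5 b - - 0 1")
```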

You can then check if a specific move is legal:
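Continuing with the board created above (Qd1+ is the key move of BK.01):

```python
# Moves can be built from UCI coordinate notation.
move = chess.Move.from_uci("d6d1")
print(move in board.legal_moves)  # True
```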

Or to get the shorter SAN notation of the move:
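Again a sketch, using the same board and move:

```python
print(board.san(move))  # 'Qd1+'
```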

Or to make a move and see if it is a check (and much more):
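For instance (still with the board and move from above):

```python
board.push(move)         # play the move on the board
print(board.is_check())  # True -- the side to move is now in check
board.pop()              # and take the move back again
```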

Here we are just going to parse an EPD line to set up a position and get the related info:
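Something along these lines (a sketch; set_epd() both sets up the position on the board and returns the parsed EPD operations, such as the expected best move and the position id):

```python
import chess

epd = '1k1r4/pp1b1R2/3q2pp/4p3/2B5/4Q3/PPP2B2/2K5 b - - bm Qd1+; id "BK.01";'

board = chess.Board()
operations = board.set_epd(epd)

print(operations["id"])   # 'BK.01'
print(operations["bm"])   # the expected best move(s) as a list of Move objects
```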

Communicating with a UCI engine

Next we will communicate via UCI with a chess engine like Maverick. First start the engine process:
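A sketch based on the chess.uci module that python-chess shipped at the time (later releases replaced it with chess.engine); the path to the executable is an assumption:

```python
import chess.uci

# Adjust the path to wherever your engine binary lives.
engine = chess.uci.popen_engine("maverick.exe")
```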

The engine expects an initial uci command. Then it will send information about its identity and options.
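Roughly (continuing with the engine started above):

```python
engine.uci()

print(engine.name)           # e.g. 'Maverick 0.51 x64'
print(list(engine.options))  # the UCI options the engine exposes
```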

Now set up the position:
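Again continuing from above, using the board we set up from the EPD:

```python
engine.ucinewgame()
engine.position(board)
```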

And calculate for two minutes. The result is the best move (according to the engine) and an optional ponder move (the expected reply the engine wants to ponder on).
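For example (the movetime is given in milliseconds):

```python
bestmove, ponder = engine.go(movetime=120000)

print(board.san(bestmove))
```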

Or in fact comparing it with the expected best move from the EPD:
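Along these lines, using the operations parsed from the EPD earlier:

```python
if bestmove in operations["bm"]:
    print(operations["id"], "solved!")
else:
    print(operations["id"], "failed")
```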

 

Putting it all together
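A compact version of the whole test could look something like this (a sketch under the same assumptions as above: the chess.uci API of the time, an assumed engine path, and a file bk.epd containing the 24 EPD records, one per line):

```python
import chess
import chess.uci

engine = chess.uci.popen_engine("maverick.exe")  # path is an assumption
engine.uci()

score = 0
with open("bk.epd") as epd_file:
    for epd in epd_file:
        epd = epd.strip()
        if not epd:
            continue

        board = chess.Board()
        operations = board.set_epd(epd)

        engine.ucinewgame()
        engine.position(board)
        bestmove, ponder = engine.go(movetime=120000)

        solved = bestmove in operations["bm"]
        if solved:
            score += 1
        print(operations["id"], board.san(bestmove), "solved" if solved else "failed")

engine.quit()
print("Score:", score, "/ 24")
```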

Congratulations Maverick 0.51 x64! On my machine you score 18/24, which is almost on a par with Stockfish 6 64’s 19/24.

 

Bonus

The original Bratko-Kopec test has one more rule to cover the case where the engine changes its mind: if it had the solution at the 30, 60 or 90 second mark but then discarded it, 1/4, 1/3 or 1/2 points are awarded respectively. In order to do this we need to look at the engine’s principal variations while it is thinking. This is included in the full code of this example.
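One way to express that scoring rule (a sketch; tracking which move the engine currently prefers at the 30, 60 and 90 second marks means watching its principal variation while it searches, which is omitted here):

```python
def bratko_kopec_points(solved, held_at):
    """Points for a single position.

    solved:  the engine's final choice (after 120 seconds) was the best move.
    held_at: the earlier checkpoints (in seconds) at which the engine's
             current choice was the best move, e.g. {30, 60}.
    """
    if solved:
        return 1.0
    if 90 in held_at:
        return 1.0 / 2
    if 60 in held_at:
        return 1.0 / 3
    if 30 in held_at:
        return 1.0 / 4
    return 0.0
```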

You can find out more about Python Chess at https://github.com/niklasf/python-chess.


Maverick 1.0 Released

I’m pleased to be able to release Maverick 1.0. This version adds considerable selectivity to the search. It has a basic implementation of Late Move Reduction and Beta pruning. In my tests it is about 2500 ELO on the CCRL scale, so it’s time to give it the 1.0 version number.
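Late move reductions can be illustrated with a toy negamax search (a sketch in Python rather than Maverick’s actual C code; the move-count and depth thresholds and the material-only evaluation are placeholder choices): later, quiet moves are searched to a reduced depth first, and only re-searched at full depth when they unexpectedly beat alpha.

```python
import chess

INFINITY = 10 ** 9
PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900}

def evaluate(board):
    # Placeholder material count from the side to move's point of view.
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def search(board, depth, alpha, beta):
    if depth <= 0 or board.is_game_over():
        return evaluate(board)

    best = -INFINITY
    for i, move in enumerate(board.legal_moves):
        quiet = not board.is_capture(move) and not board.gives_check(move)
        # Reduce late, quiet moves by one ply when the side to move is not in check.
        reduction = 1 if (i >= 4 and depth >= 3 and quiet and not board.is_check()) else 0

        board.push(move)
        score = -search(board, depth - 1 - reduction, -beta, -alpha)
        if reduction and score > alpha:
            # The reduced search beat alpha, so verify it with a full-depth re-search.
            score = -search(board, depth - 1, -beta, -alpha)
        board.pop()

        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best
```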

The full list of changes is:

  • Added Late Move Reduction
  • Added beta pruning (inspired by Senpai)
  • Added unstoppable pawn code
  • Added king and pawn endgame knowledge
  • Added knight outpost bonus
  • Added refutation moves to the move ordering

There are also ARM and Linux versions included for the first time (thanks, Jim Ablett).

I’m pleased with the style of play. Maverick’s aggression coupled with its positional naivety makes for interesting play! If you play against chess engines I’d be interested in any feedback.

You can download Maverick 1.0 from the Download Page


Maverick 0.60 Released

Today I’m releasing Maverick 0.60. The main changes are as follows:

  • Added support for Chess960
  • Added basic king safety (this makes the playing style much more attractive)
  • Fixed a problem with using the opening book with Arena
  • Fixed an obscure bug which could crash the engine after a stop command
  • Transferred source code to Github (https://github.com/stevemaughan/maverick.git)
  • Made a bazillion tweaks which may, or may not, help!

In self-play matches against 0.51 it shows an improvement of about 50 ELO. I’m hoping this will translate to a real-world strength increase of at least 30 ELO.

I’m now about to start working on improving the selective search!

You can download it on the download page


Free Version of Visual Studio Professional 2013

Last week Microsoft released a Community Edition of Visual Studio 2013. This is a free version of the Professional edition of Visual Studio 2013. Previously Microsoft’s free edition was Visual Studio Express, which only compiled to 32-bit and didn’t include a profiler or Profile-Guided Optimization. The new Community Edition includes all of these goodies and can generate 64-bit executables. This is a big deal for chess programmers. It means you can easily develop and test your engine from within the same high-quality development environment (I know we’ve always had GCC, but this is more integrated than the mishmash of open source tools).

There are some constraints on who can use the Community Edition. You can only use it if your engine isn’t commercial, or if it brings in less than $1 million per year – I’m sure that covers every chess engine developer!!

You can find out more and download here.

The Holiday season is coming. I’m hoping to have more time to dedicate to chess programming. For the last six months I’ve been swamped with work – but I think there is a light at the end of the tunnel. More updates to come.


Two New Engines Hosted on ChessProgramming.net

I’m excited to let everyone know about two new engines which are to be hosted on this blog.

The first engine is Fruit Reloaded. This is a fork of Fabien Letouzey’s Fruit 2.1. Most of the new development (including SMP search) has been done by Daniel Mehrmann and Ryan Benitez. You can find out more here:

http://www.chessprogramming.net/fruit-reloaded/

The second engine is a big surprise. Fabien himself has been dabbling once again in chess programming. He’s come up with a brand new engine – Senpai! It’s a bitboard engine (Fruit was array based) with SMP search. I ran some quick tests on a beta version and Senpai 1.0 drew a 150-game match against Chiron 2.0. Although this is a small sample against only one opponent, it implies a rating of approximately 3100 ELO on the CCRL scale. You can find out more about Senpai here:

http://www.chessprogramming.net/senpai/

Let the fun begin!


Automatic Tuning of Evaluation Function

When it comes to Maverick’s evaluation function I’m frustrated and excited in equal measure!

My focus over the last couple of months has been to improve Maverick’s evaluation function. However, I’ve found it quite difficult to improve on Maverick 0.51’s actual playing strength. Adding extra knowledge (e.g. backward pawns) seems to add little in terms of playing strength. I’m struck by the massive number of ways to encode chess knowledge. I feel like a blind archer as I make wild guesses trying to calibrate the parameters.

There must be a better way!

Early on in Maverick’s development I came across Thomas Petzke’s approach to tuning an evaluation function. He uses a form of genetic algorithm (PBIL) to tune the parameters. PBIL optimization algorithms are really neat – they represent each number in binary format. Each bit of each number is represented as a floating point value between zero and one. As the system “learns”, the floating point values are “trained” and gravitate towards either zero or one based on the training data. In Thomas’ case he played a few games of chess to assess the fitness of a set of parameters. This is expensive – but ultimately game-playing ability is the attribute we’d like to optimize, so maybe the training time is justified.
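As a rough illustration of how PBIL works (not Thomas’ actual implementation; the bit width, population size, learning rate and the toy fitness function are placeholder choices), a bare-bones loop looks something like this:

```python
import random

BITS_PER_PARAM = 8   # each evaluation parameter encoded as 8 bits
NUM_PARAMS = 4       # e.g. a handful of evaluation weights
POP_SIZE = 30
LEARNING_RATE = 0.1
GENERATIONS = 200

def decode(bits):
    """Turn a flat bit list into integer parameter values (0..255 here)."""
    params = []
    for i in range(NUM_PARAMS):
        chunk = bits[i * BITS_PER_PARAM:(i + 1) * BITS_PER_PARAM]
        params.append(int("".join(str(b) for b in chunk), 2))
    return params

def fitness(params):
    # Placeholder: in Thomas' setup this would mean playing games with the
    # decoded evaluation weights and scoring the results.
    target = [100, 320, 330, 50]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

# The probability vector: one value in [0, 1] per bit, starting undecided at 0.5.
probabilities = [0.5] * (BITS_PER_PARAM * NUM_PARAMS)

for _ in range(GENERATIONS):
    # Sample a population of bit strings from the probability vector.
    population = [[1 if random.random() < p else 0 for p in probabilities]
                  for _ in range(POP_SIZE)]
    best = max(population, key=lambda bits: fitness(decode(bits)))

    # Nudge each probability towards the corresponding bit of the best individual.
    probabilities = [p + LEARNING_RATE * (bit - p)
                     for p, bit in zip(probabilities, best)]

print(decode([1 if p > 0.5 else 0 for p in probabilities]))
```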

Back in 2000 I worked a lot with standard genetic algorithms. I used them to evaluate marketing campaigns. I think PBIL may be even better for evaluating marketing campaigns (but that’s a story for another day). I’m certainly interested in using them to tune Maverick’s evaluation function. The only problem is that Thomas’ method takes ages to complete (weeks!). I’d prefer a method which is quicker.

Then I came across a post on CCC by Peter Österlund:

How Do You Automatically Tune Your Evaluation Tables

Peter outlines a new way to tune an evaluation function. His approach takes 2.5 million chess positions and minimizes the following fitness function:

E = 1/N * sum(i=1..N, (result[i] - Sigmoid(QuiescentScore(pos[i])))^2)
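In code the error measure looks something like this (a sketch: the quiescence search belongs to the engine and is simply passed in as a function here, and the sigmoid’s scaling constant is something to be fitted rather than a value taken from Peter’s post):

```python
import math

def sigmoid(score_cp, k=1.0):
    # Map a centipawn score to an expected result in [0, 1]; k controls how
    # steep the curve is and should be fitted to the training data.
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

def evaluation_error(positions, results, qscore, k=1.0):
    """E = 1/N * sum((result_i - sigmoid(qscore(pos_i)))^2).

    positions: the training positions
    results:   the game result for each position (1, 0.5 or 0)
    qscore:    a function returning the engine's quiescence score in centipawns
    """
    total = 0.0
    for position, result in zip(positions, results):
        total += (result - sigmoid(qscore(position), k)) ** 2
    return total / len(positions)
```

Minimizing E over the evaluation parameters (with PBIL or any other optimizer) then pulls the quiescence scores towards the actual game results.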

This is really quite interesting for the following reasons:

  • Since we’re not playing complete chess games this runs *much* faster – maybe less than one day of computing time
  • The sigmoid function is *really* sensitive in the -100 to +100 centipawn range. This is a critical region where virtually all games are decided. If we can create an evaluation function which accurately evaluates this range then we’re likely to have a strong chess engine
  • I suspect Houdini uses a similar technique to calibrate its evaluation function, since its evaluation is linked to the probability of winning. Robert Houdart mentions this on his website: “Houdini 4 uses calibrated evaluations in which engine scores correlate directly with the win expectancy in the position. A +1.00 pawn advantage gives a 80% chance of winning the game against an equal opponent at blitz time control. At +2.00 the engine will win 95% of the time, and at +3.00 about 99% of the time. If the advantage is +0.50, expect to win nearly 50% of the time.”
  • Some people who have tested this approach to training have reported good results
  • When tested, the pawn evaluation parameters (resulting from the PBIL optimization) varied considerably between the middlegame and endgame. The middlegame value of a pawn was 50 centipawns, while the endgame value was +145 centipawns. If these values are robust and used in normal play they are likely to produce exciting chess, where the engine is happy to sacrifice a pawn for a modest positional advantage. This sounds like the recipe for an interesting engine – which is one of the goals for Maverick!

So I’m in the process of rewriting Maverick’s evaluation code to accommodate the PBIL algorithm. I’m also writing a PGN parser (in Delphi) so I can train the evaluation function using different sets of training positions.

With all of this re-writing it may be a little while before I can release a new version of Maverick. My aim is still to take Maverick to 2500 ELO using only an improved evaluation function!

I’ll keep you posted!


Maverick Wins CCRL Division 7 (and Gets a CCRL Rating)

Over the past month Graham Banks has been running the Division 7 competition. I was delighted when Maverick managed to win with a score of 27.5 out of 44 games. After nine rounds Maverick languished in the bottom half of the table, but it managed to fight back and win! During the tournament I logged onto Graham’s site quite a few times and it was nice to chat with Graham and Erik. There were many nail-biting games – not good for the blood pressure!

Graham then ran a gauntlet competition for Maverick to get enough games for a rating. It managed a respectable rating of 2317 ELO on the CCRL scale. You can see the details here:

Maverick’s CCRL Rating

As I mentioned in a previous post, Maverick doesn’t do so well at slow time controls, so I’m happy it came out above 2300 ELO on the CCRL scale.

Many thanks to Graham for taking the time to test Maverick!