Chess Position Evaluation Philosophy – Part 1
I’m now starting to think about the evaluation function. I’m doing this at a completely different stage than I did for Monarch (the Delphi or C version), but that’s intentional, and here’s why.
Monarch was my first chess engine. While I had been a computer chess fan (fanatic?) since the early 1980s and enjoyed programming, I didn’t write my first engine until 1999. I remember being giddy with excitement when I completed the move generator and the make/unmake routines. I don’t think I tested Monarch as rigorously as I should have using the perft methodology. Once I was at that stage, my aim was to create a chess-playing entity as quickly as possible. I slapped some piece-square tables together, bolted on a vanilla alpha-beta search, and waited to see what would happen. To my amazement, even with this rudimentary evaluation function, the first version played “OK” (maybe 1900 Elo).
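To give a concrete picture of what such a rudimentary evaluation looks like, here is a minimal piece-square-table sketch in C. This is not Monarch’s (or Maverick’s) actual code; the piece values, the knight table, the 64-square board arrays and the function name are all illustrative assumptions.

```c
/* Minimal "material plus piece-square table" evaluation.
   Illustrative only - the values, the mailbox-style 64-square arrays
   and the names are not taken from Monarch or Maverick. */

enum { EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

/* Material values in centipawns; the king has no material value
   because it can never be captured. */
static const int piece_value[7] = { 0, 100, 320, 330, 500, 900, 0 };

/* Positional bonus for a knight on each square, from White's point of
   view: central squares are encouraged, the rim is penalized. */
static const int knight_pst[64] = {
    -50,-40,-30,-30,-30,-30,-40,-50,
    -40,-20,  0,  5,  5,  0,-20,-40,
    -30,  5, 10, 15, 15, 10,  5,-30,
    -30,  0, 15, 20, 20, 15,  0,-30,
    -30,  5, 15, 20, 20, 15,  5,-30,
    -30,  0, 10, 15, 15, 10,  0,-30,
    -40,-20,  0,  0,  0,  0,-20,-40,
    -50,-40,-30,-30,-30,-30,-40,-50
};

/* board[sq] holds a piece type; color[sq] is +1 for White, -1 for Black.
   The score is positive when White is better. */
int evaluate(const int board[64], const int color[64])
{
    int score = 0;
    for (int sq = 0; sq < 64; sq++) {
        if (board[sq] == EMPTY)
            continue;
        int s = piece_value[board[sq]];
        if (board[sq] == KNIGHT)
            s += (color[sq] == 1) ? knight_pst[sq]
                                  : knight_pst[sq ^ 56]; /* mirror rank for Black */
        score += color[sq] * s;
    }
    return score;
}
```

The idea is simply material plus a per-square bonus. Everything else, such as pawn structure, king safety and mobility, is missing, which is exactly what makes this kind of evaluation “rudimentary”.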
However, I suspect a poor evaluation routine restricts and hampers improvements to the search. I look at creating a chess program as one big optimization task. As programmers we are doing a manual “hill climb” optimization: changing parameters, adding knowledge and improving the search, all the while keeping the improvements and throwing out the changes which don’t work. The number of dimensions and the size of the domain space for this optimization are massive. In a hill-climb optimization, where you start has some impact on where you’ll finish. My theory is that if you start with a moderately good evaluation function (i.e. not just piece-square tables), you’ll have a better chance of improving the engine in the long term and of avoiding local optima. It’s a hunch; I don’t have any proof that this is the case.
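To make the analogy concrete, here is a sketch of that hill climb as if it were automated: nudge one evaluation parameter, keep the change if a test match improves, revert it if not. The names play_match, eval_params and the step size are hypothetical; in reality this loop is run by hand, one engine change at a time.

```c
#include <stdlib.h>

#define NUM_PARAMS 16

/* Hypothetical evaluation parameters (piece values, bonuses, etc.). */
static double eval_params[NUM_PARAMS];

/* Stand-in for an engine-vs-engine test match; a real version would play
   many games against a fixed baseline and return the match score.
   Here it is just a toy objective so the sketch compiles and runs. */
static double play_match(const double *params)
{
    double s = 0.0;
    for (int i = 0; i < NUM_PARAMS; i++)
        s -= (params[i] - 100.0) * (params[i] - 100.0);
    return s;
}

/* Hill climb: change one parameter at a time, keep improvements,
   throw out the changes that don't work. */
static void hill_climb(int iterations, double step)
{
    double best = play_match(eval_params);
    for (int i = 0; i < iterations; i++) {
        int p = rand() % NUM_PARAMS;
        double old = eval_params[p];
        eval_params[p] += (rand() % 2) ? step : -step;
        double score = play_match(eval_params);
        if (score > best)
            best = score;            /* keep the improvement       */
        else
            eval_params[p] = old;    /* revert: the change was bad */
    }
}

int main(void)
{
    hill_climb(10000, 1.0);
    return 0;
}
```

Where a loop like this ends up depends on where eval_params starts, which is the point of the argument: a richer starting evaluation gives the climb a better neighbourhood to explore.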
So I’m trying to put all of the components of a strong engine in place from day one. I could be completely wrong, but that’s why I’m putting some effort into creating the evaluation function before Maverick has even played a single move in a real game of chess.