PZChessBot

46 devlogs • 97h 53m • Ship certified • Created by Kevin

IMPORTANT: Reload the page if you get any error like "engine did not load"

World top 100 chess engine, shooting for higher!

Approximate CCRL rating: 3450

Also check out the lichess account: https://lichess.org/@/PZChessBot

PZChessBot is UCI compliant and also human compliant! After downloading a binary, run `help` to get a list of commands.
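For reference, a minimal UCI session looks roughly like this (`>` is what the GUI sends, `<` is what the engine replies; the id/option/info lines below are placeholders following the standard protocol, not PZChessBot's exact output):

```
> uci
< id name PZChessBot
< uciok
> isready
< readyok
> position startpos moves e2e4 e7e5
> go depth 20
< info depth 1 ...
< ...
< bestmove ...
```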

This is a project I started a long time ago but only recently picked back up. After restarting from scratch, it quickly reached 2000 Elo in under a month. Then I trained a neural network to handle the positional evaluations, raising it to 3000 Elo. Since then, small tweaks and re-trains of the neural network have gradually piled up, placing it at a current rating of approximately 3400 Elo.

Timeline

Earned sticker
  • Don't allow NMP in singular search (10 Elo)
  • Compile with LTO (15 Elo)
  • Add node time management (24 Elo) (sketch after this list)
  • Lower history values each search (8 Elo)
  • Fix node counting (5 Elo)
  • Do not store best move on TT fail-low (4 Elo)
  • Fix checkmate (1 Elo)
  • Disallow IIR at expected all-nodes (12 Elo)
  • Raise singular extension beta (8 Elo)
  • Disallow corrhist update on mate scores (5 Elo)
  • Relax SE bounds (5 Elo)
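Node time management (mentioned above) is the common trick of scaling the soft time bound by how much of the search effort went into the current best root move: if almost all nodes were spent on one move, the position is "easy" and the engine can stop sooner. A rough illustration with made-up constants, not PZChessBot's actual code:

```cpp
#include <cstdint>

// Scale the soft time bound after each iterative-deepening iteration based on
// what fraction of all searched nodes went into the current best root move.
// The constants are purely illustrative, not the engine's tuned values.
uint64_t adjust_soft_bound(uint64_t base_soft_bound_ms,
                           uint64_t best_move_nodes,
                           uint64_t total_nodes) {
    if (total_nodes == 0)
        return base_soft_bound_ms;
    double best_fraction = double(best_move_nodes) / double(total_nodes);
    // e.g. fraction 0.9 -> 0.8x time (easy move), fraction 0.3 -> 1.4x time.
    double scale = 1.7 - best_fraction;
    return uint64_t(double(base_soft_bound_ms) * scale);
}
```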
Kevin • 3h 35m • 6 days ago
  • Adjust soft bound based on search complexity (9 Elo)
  • Only do NMP at sufficient depth (6 Elo)
  • Prefetch TT values before recursing (7 Elo)
  • Reduce killer moves less (16 Elo)
  • Increase overall LMR (7 Elo STC, 9 Elo LTC)
  • QS Prior Extensions (5 Elo)
  • NMP Improving (4 Elo)
Kevin • 1h 40m • 14 days ago

I actually might just be the goat, this is the largest green wave we've seen in a while
- Use optimized move picking in qsearch instead of std::stable_sort (which is slow) (4.5 Elo)
- Optimize capture generation for qsearch (6 Elo)
- Optimize NNUE inference with Finny-like tables (15 Elo)
- Only store upper 32 bits of hash (5.7 Elo) (sketch after this list)
- Fail-firm beta adjustment when failing high (16.6 Elo)
- Simplify Finny NNUE inference (1 Elo)
- Modify time allocation (8.3 Elo)
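The hash item above refers to shrinking each TT entry by keeping only the upper 32 bits of the 64-bit Zobrist key as a verification tag, since the slot index is already derived from the key. A minimal sketch of the idea; the field names and layout are illustrative, not PZChessBot's actual entry format:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct TTEntry {
    uint32_t key32 = 0;   // upper 32 bits of the Zobrist key, used to verify hits
    int16_t  score = 0;
    uint16_t move  = 0;
    uint8_t  depth = 0;
    uint8_t  flags = 0;   // bound type, age, ...
};

struct TranspositionTable {
    std::vector<TTEntry> entries;

    explicit TranspositionTable(std::size_t count) : entries(count) {}

    // The slot is picked from the full key; the stored 32-bit tag catches
    // (almost all) collisions between positions mapping to the same slot.
    TTEntry* probe(uint64_t key) {
        TTEntry& e = entries[key % entries.size()];
        return e.key32 == uint32_t(key >> 32) ? &e : nullptr;
    }

    void store(uint64_t key, TTEntry data) {
        data.key32 = uint32_t(key >> 32);
        entries[key % entries.size()] = data;
    }
};
```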

Ship 4

1 payout of 363.0 shells

Kevin • 18 days ago • Covers 11 devlogs and 20h 6m
  • Disallow LMR at low depths (7 Elo)
  • Disallow LMR for early moves (5 Elo)
  • Allow razoring up to depth 8 (+2 Elo)
  • Do more NMP if score is far above beta; depth cap RFP (+5 Elo)
  • Add nonpawn corrhist (+8 Elo)
Kevin • 2h 36m • 22 days ago
  • Re-train NNUE again
  • Allow adjustments to NMP margin based on current depth (+10 Elo)
  • Generate another 600M positions for the neural network
  • Cutnode reductions (4.8 +/- 3.5 Elo)
  • Dynamic PVS SEE margin (5.2 +/- 3.8 Elo)
  • PrevMove CorrHist (6.7 +/- 4.4 Elo)
  • TT-corrected QS evals (8.9 +/- 5.2 Elo)
Kevin • 2h 34m • 27 days ago
  • Create a cool little visualization thing for the NN weights
Kevin • 2h 31m • 28 days ago
  • Do some retrains of the main evaluation NNUE
  • Continue with self-gen, gain another +20 Elo from WDL updates (0.15 -> 0.40)
  • Continue tweaking the new method (normalization helped it gain @ LTC but hindered its performance @ STC), now only gaining +3 Elo
  • Not sure if I want to merge this into PZ ... but we'll see

Try a completely new pruning method (cut net) that gained +15 Elo (WTF!!!!!)
Might PR this idea into Stockfish and see if it can also gain from this

nosrep • about 1 month ago
inovashun
  • Test the new neural network against v3.0 (+16.0 @ STC, -4.7 @ LTC)
  • Test the new neural network against generation 12 of the self-trained neural network (+16.5 @ STC, +30.7 @ LTC)
  • Completely overhaul the PVSearch mechanism in PZChessBot
  • Fix PV handling (for the most part; still some weird edge cases)
  • Modify training schedule and prepare to train next generation of NNUE w/ 2.6 billion positions (78GB of data)

Continuing on with general QoL fixes in PZChessBot since the search is pretty much complete :)
- Add MultiPV support
- Test 3-PV vs 1-PV @ fixed depth=10 (+57 Elo)

Ship 3

1 payout of 151.0 shells

Kevin • about 1 month ago • Covers 10 devlogs and 8h 3m
  • Try adding cutnode reductions (to no avail, again)
  • Test disallowing NMP in PV nodes
  • I think my nodetype handling is kinda cooked
  • Add a web-playable version, compile with WASM
nosrep • about 1 month ago
peak
  • Continue self-generated NNUE
  • Change to using 5 buckets and also shuffle data (+30 Elo)
  • Fix the TT issue that has been cooked for a while (+11 Elo)
Basically, I was storing everything into the TTable as an UPPER_BOUND (which is obviously wrong), but fixing that bug on its own would always lose Elo. The real problem was that I was also doing TT cutoffs in PV nodes, which is expressly disallowed; I only recently discovered this... (sketch below)
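For context, the fix comes down to two rules: store the right bound type depending on how the node resolved (fail-low -> UPPER_BOUND, fail-high -> LOWER_BOUND, otherwise EXACT), and only take TT cutoffs in non-PV nodes. A rough sketch of the probe-side check, with assumed names rather than the engine's real ones:

```cpp
#include <cstdint>

enum Bound : uint8_t { EXACT, LOWER_BOUND, UPPER_BOUND };

struct TTEntry {
    int   score;
    int   depth;
    Bound bound;
};

// True if the stored entry lets us return its score immediately.
// Cutoffs are only allowed outside the PV, which was the actual bug here.
bool tt_cutoff(const TTEntry& e, int depth, int alpha, int beta, bool pv_node) {
    if (pv_node || e.depth < depth)
        return false;
    if (e.bound == EXACT)
        return true;
    if (e.bound == LOWER_BOUND && e.score >= beta)
        return true;   // stored score is a lower bound and already beats beta
    if (e.bound == UPPER_BOUND && e.score <= alpha)
        return true;   // stored score is an upper bound and can't reach alpha
    return false;
}
```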
  • Test out a bunch of tweaks, including:
  • Lowering RFP threshold to 100 (+2.8 Elo)
  • Center control evaluation term (+10.2 Elo)
  • King safety evaluation term (+11.3 Elo)
Around 2500 Elo now for the HCE version of the engine!
  • Also use PeSTO values for material, and switch the tapering system to use PeSTO's system (+148 Elo)
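PeSTO's tapering interpolates between a middlegame and an endgame score using a material-based game phase (minor pieces count 1, rooks 2, queens 4, capped at 24). A minimal sketch of that interpolation, following the standard PeSTO scheme rather than PZChessBot's exact code:

```cpp
// mg_score / eg_score are the middlegame and endgame evaluations of the
// position; the piece counts are totals for both sides combined.
int tapered_eval(int mg_score, int eg_score,
                 int knights, int bishops, int rooks, int queens) {
    int phase = 1 * knights + 1 * bishops + 2 * rooks + 4 * queens;
    if (phase > 24) phase = 24;  // guard against extra pieces from promotions
    return (mg_score * phase + eg_score * (24 - phase)) / 24;
}
```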
  • Tweak the eval function to prioritize rooks on 7th (+15 Elo)
  • Use PeSTO PSQTs (+120 Elo)
  • Add bishop pair bonus (+9 Elo)
  • Taper the PSQTs for endgames for pawns and kings (+79 Elo)
  • Test adding mobility bonus but my code is too slow for it to gain Elo... maybe in the future
  • Begin working on the hand-crafted evaluation part of the engine
  • Restart from scratch (with just material counting) and add PSQTs (+270 Elo)
Kevin • about 2 months ago
@nosrep my psqts are completely from scratch lol, not tuned at all
nosrep • about 2 months ago
which psts are you using? or did you make them from scratch (handwritten (??)/texel)
  • Touch up the data generation script to skip heavily biased starting positions
  • Generate another 500M chess positions for training the network!
  • Also made a small change: instead of only deepening the search for the hash move, singular extensions now raise the depth of the entire node (+3 Elo)

Ship 2

1 payout of 381.0 shells

Kevin • about 2 months ago • Covers 7 devlogs and 26h 12m

Added pretty console output! It looks really great IMO

  • Add QSearch Futility Pruning (+7 Elo) (sketch after this list)
  • Reduce less on PV nodes (+5 Elo)
  • Reduce more when the TT Move is a capture (+6 Elo)
  • Fix killer move logic (+18 Elo)
  • Train a new horizontally mirrored network (+28 Elo)
  • Add PVS SEE pruning (+24 Elo)
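QSearch futility pruning (first item above) is the classic delta-pruning idea: skip a capture when even winning the captured piece plus a generous margin cannot lift the stand-pat eval above alpha. A tiny sketch of the condition; the margin is illustrative, not a tuned value:

```cpp
constexpr int QS_FUTILITY_MARGIN = 150;  // illustrative, not the tuned value

// Called in quiescence search before searching a capture.
// static_eval is the stand-pat score, captured_value the value of the victim.
bool qs_futile(int static_eval, int alpha, int captured_value) {
    return static_eval + captured_value + QS_FUTILITY_MARGIN <= alpha;
}
```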
  • Rewrite move ordering scheme (+5 Elo)
  • Implement LMP (+8 Elo)
  • Implement singular extensions (+5 Elo)
  • Implement negative extensions (+13 Elo)
  • Implement history pruning (+21 Elo)
  • Do more RFP when improving (+18 Elo)
  • Run an SPSA (+5 Elo)
  • Do not reset Transposition Table on position command (+65 Elo!!!!!)
  • Only do NMP when eval is above beta (+12 Elo) (sketch after this list)
  • Only call TT once per node (+12 Elo)
  • Add razoring (+31 Elo)
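The first and last items above are guards around standard pruning: null-move pruning is only attempted when the static eval already meets beta, and razoring drops straight into quiescence when the eval is hopelessly below alpha at shallow depth. A sketch of the two conditions with illustrative thresholds (not the engine's tuned values):

```cpp
// Illustrative thresholds only; the engine's tuned values differ.
constexpr int NMP_MIN_DEPTH   = 3;
constexpr int RAZOR_MAX_DEPTH = 3;
constexpr int RAZOR_MARGIN    = 200;

// Null-move pruning is only worth trying when we are already doing well,
// i.e. the static eval is at least beta (and we're not in check).
bool should_try_null_move(int static_eval, int beta, int depth, bool in_check) {
    return !in_check && depth >= NMP_MIN_DEPTH && static_eval >= beta;
}

// Razoring: at shallow depth, if the eval is far below alpha, fall back to
// quiescence search instead of doing a full-width search of the node.
bool should_razor(int static_eval, int alpha, int depth) {
    return depth <= RAZOR_MAX_DEPTH && static_eval + RAZOR_MARGIN * depth < alpha;
}
```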
  • Used TT eval instead of static eval when available (+10 ELO)
  • Use lazy sorting (+2 ELO)
  • Test various tweaks, including:
  • always replace scheme for TTable
  • lazy sorting in main
  • doing NWS first in PVS, even after score > alpha condition is met
  • disallowing RFP in mating positions (for more accurate mating scores)

none of these passed :(


Ship 1

1 payout of 585.0 shells

Kevin • 3 months ago • Covers 15 devlogs and 33h 26m

Continuing rewrite:
- Implemented check extensions (50 ELO)
- Implemented proper time management (30 ELO)

Testing:
- Futility Pruning
- QSearch Futility Pruning
- Capture History Heuristic

  • Continuing rewrite
  • Implemented quiescence, transposition tables, aspiration windows, PVSearch, RFP, NMP, SEE, and FP
  • We're already back to 3200 ELO!

To answer @nosrep, I'm rewriting the search because I wrote the bulk of the foundation for the search back when I didn't really know much about chess engines. As a result, my search code is, honestly, extremely messy and difficult to deal with. So, I'm trying to clean it up (along with fixing a few bugs and adding some small microoptimizations)! Also, when I wrote the base search, I didn't know how to properly test code either, so I just made changes and hoped they made the engine stronger. Now, by testing all my changes, I can be sure that my engine is actually stronger, and also I can document my progress and write blog posts on it!

nosrep • 3 months ago

preach

  • Begin working on rewrite of search
  • Implemented a/b pruning, basic move ordering (MVV-LVA, Killer Moves, HH)
  • Also documenting progress on int0x80.ca (other project)
nosrep • 3 months ago

why rewrite?

  • Add static exchange evaluation in quiescence search (~20 ELO)
~3300 ELO now
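In practice, SEE in quiescence means pruning captures that the static exchange evaluator says lose material even in the best case. A tiny sketch of how it's used, with placeholder types and a stubbed `see()` standing in for the real exchange evaluator:

```cpp
// Placeholder types standing in for the engine's real Position and Move.
struct Position {};
struct Move {};

// Assumed helper: net material outcome (in centipawns) of playing the capture
// and resolving the whole exchange sequence on the target square.
int see(const Position&, const Move&) { return 0; }  // stub for illustration

// In quiescence search, a capture with negative SEE loses material even with
// best play on the exchange, so it is skipped rather than searched.
bool skip_losing_capture(const Position& pos, const Move& m) {
    return see(pos, m) < 0;
}
```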
  • Messed around with trying to add Horizontal Mirroring to optimize my NNUE inference but didn't work (will be revisited soon)
  • Flesh out data generation workload, tool is fully working now (~3.5kpos/s)
  • Test adding Capture History to QSearch
  • Test using a model trained on 0.05WDL instead of 0.15WDL

Start working on self-gen data for self-training network
Tried a variety of small improvements that didn't work :(

  • Finally got CorrHist working (+7 ELO)
  • Still waiting on other tests that will take eons
  • Figured out why my pawn corrhist wasn't working
  • Waiting on SPRT results (again)
  • Broke the bank to rent a 48CPU Google server to speed up tests
  • Please help my sanity
  • Various small speed optimizations (~4 ELO)
  • Try adding Pawn CorrHist (but it's failing miserably)
We're like 3250 ELO right now!
  • Completely overhaul the NNUE evaluation system
  • Change network to use king input buckets (~54 ELO)
Insane change for an insane amount of ELO
  • Add extended futility pruning along with another SPSA tune (~11 ELO)
  • Testing adding internal iterative deepening, seems to not be doing too well (-0.95 +/- 4.3 ELO)
  • Testing increasing the NMP R-value from 4 to 5, awaiting results right now
  • Change history heuristic to use history gravity formula (~16 ELO) (sketch after this list)
  • Add capture history heuristic to make MVV-LVA -> MVV+CaptHist (~14 ELO)
  • Allow RFP to be performed at all depths (~5 ELO)
Engine is now around 3160 ELO, putting it around 140th in global standings
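The history gravity formula mentioned above replaces a plain additive history update with one that pulls large entries back toward zero, so history scores saturate instead of growing without bound; the same update style also works for the new capture history table. The standard form, with an illustrative clamp value:

```cpp
#include <cstdlib>

constexpr int MAX_HISTORY = 16384;  // illustrative saturation bound

// "History gravity": the larger the current entry already is, the more the
// update resists pushing it further in the same direction.
void update_history(int& entry, int bonus) {
    entry += bonus - entry * std::abs(bonus) / MAX_HISTORY;
}
```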
  • Added futility pruning (~20 ELO)
  • Vectorize nnue_eval() (~7 ELO)
  • Run a full SPSA tune of values and use the tuned values (~20 ELO)
Didn't have time to run a progtest but it's looking good!
  • Improved aspiration window logic to gradually widen instead of defaulting to a full-window search (~20 ELO) (sketch after this list)
  • Improved time management to include a soft bound (~20 ELO)
Engine should be around 3120-ish ELO now, putting it around 160th in global rankings (further testing needed)
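Gradual aspiration widening means re-searching with a progressively larger window around the previous iteration's score instead of immediately falling back to a full (-inf, +inf) window after a fail. A rough sketch of the driver loop; `search()` is assumed to exist elsewhere in the engine and the window sizes are illustrative:

```cpp
#include <algorithm>

constexpr int INF = 32000;

// Assumed to be the engine's alpha-beta entry point (declaration only).
int search(int depth, int alpha, int beta);

// Run one iterative-deepening iteration with a gradually widening window.
int aspiration_iteration(int depth, int prev_score) {
    int delta = 25;  // illustrative initial half-window in centipawns
    int alpha = std::max(prev_score - delta, -INF);
    int beta  = std::min(prev_score + delta,  INF);

    while (true) {
        int score = search(depth, alpha, beta);
        if (score <= alpha)
            alpha = std::max(alpha - delta, -INF);   // fail-low: widen downwards
        else if (score >= beta)
            beta = std::min(beta + delta, INF);      // fail-high: widen upwards
        else
            return score;                            // inside the window: done
        delta += delta / 2;                          // grow the window each retry
    }
}
```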

Finalized some stuff for the v2.0 release! Not too sure if I will continue work (since most of it will be with NNUE stuff and that's really :( and also because engine dev is getting really dry)
