# Chess Bot Model - TinyPCN
A chess playing neural network trained on expert games from the Lichess Elite Database.
## Model Description
This is a policy-value network inspired by AlphaZero, designed to evaluate chess positions and suggest moves.
## Architecture
- Input: 18-plane board representation (12 pieces + 6 metadata planes)
- Convolutional backbone: 32 filters, 1 residual block; ~9.6M parameters total (9,611,202)
- Policy head: 4,672-dimensional output (one per legal move encoding)
- Value head: Single tanh output (-1 to +1 for position evaluation)
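The exact plane layout lives in `model.py`'s `encode_board`; as an illustration of the 18-plane scheme, here is a plausible sketch with an assumed ordering (12 one-hot piece planes, then side to move, four castling-rights planes, and an en passant plane — the real ordering may differ):

```python
import torch
import chess

def encode_board_sketch(board: chess.Board) -> torch.Tensor:
    """Hypothetical 18-plane encoding. Planes 0-11: one-hot piece maps
    (6 piece types x 2 colors); planes 12-17: assumed metadata ordering."""
    planes = torch.zeros(18, 8, 8)
    for square, piece in board.piece_map().items():
        row, col = divmod(square, 8)
        # 0-5 for white P,N,B,R,Q,K; 6-11 for the black pieces
        idx = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        planes[idx, row, col] = 1.0
    # Metadata planes (assumed): side to move, castling rights, en passant
    planes[12].fill_(1.0 if board.turn == chess.WHITE else 0.0)
    planes[13].fill_(float(board.has_kingside_castling_rights(chess.WHITE)))
    planes[14].fill_(float(board.has_queenside_castling_rights(chess.WHITE)))
    planes[15].fill_(float(board.has_kingside_castling_rights(chess.BLACK)))
    planes[16].fill_(float(board.has_queenside_castling_rights(chess.BLACK)))
    if board.ep_square is not None:
        row, col = divmod(board.ep_square, 8)
        planes[17, row, col] = 1.0
    return planes
```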
## Training Data
- Dataset: Lichess Elite Database (games from 2200+ ELO players)
- Positions trained: 16,800,000
- Epochs: 10
## Performance
- Final Policy Loss: 2.8000
- Final Value Loss: 0.8500
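One way to read the policy loss: a cross-entropy of 2.80 nats corresponds to a perplexity of about 16, i.e. the policy is roughly as uncertain as a uniform choice over ~16 moves, versus the ~30+ legal moves available in a typical middlegame position. This interpretation is standard for cross-entropy losses, not a claim from the training logs:

```python
import math

# Perplexity = exp(cross-entropy in nats): the "effective branching factor"
# the policy is still uncertain over after training.
policy_loss = 2.80
perplexity = math.exp(policy_loss)
print(f"{perplexity:.1f}")  # ≈ 16.4
```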
## Usage
```python
import torch
import chess

from model import TinyPCN, encode_board

# Load model
model = TinyPCN(board_channels=18, policy_size=4672)
model.load_state_dict(torch.load("chess_model.pth"))
model.eval()

# Evaluate a position
board = chess.Board()  # or chess.Board("fen string")
board_tensor = encode_board(board).unsqueeze(0)
with torch.no_grad():
    policy_logits, value = model(board_tensor)

# Value interpretation:
#   +1.0 = winning for current player
#    0.0 = drawn/equal position
#   -1.0 = losing for current player
print(f"Position evaluation: {value.item():.4f}")
```
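The 4,672-way policy head scores every move encoding, legal or not, so in practice you mask the logits down to the current position's legal moves before picking one. A minimal sketch of that masking, assuming some move-to-index mapping (the real mapping ships with `model.py`; the `toy_index` placeholder below just flattens from/to squares and ignores promotions, so it is for demonstration only):

```python
import torch
import chess

def select_move(board, policy_logits, move_to_index):
    """Pick the highest-probability *legal* move by masking the policy
    logits: illegal entries get -inf, so softmax assigns them zero mass."""
    mask = torch.full_like(policy_logits, float("-inf"))
    legal = list(board.legal_moves)
    for move in legal:
        idx = move_to_index(move)
        mask[idx] = policy_logits[idx]
    probs = torch.softmax(mask, dim=0)
    best = max(legal, key=lambda m: probs[move_to_index(m)].item())
    return best, probs

# Placeholder mapping for demonstration only -- NOT the model's 4,672-way
# encoding: it flattens (from_square, to_square) and ignores promotions.
def toy_index(move: chess.Move) -> int:
    return move.from_square * 64 + move.to_square
```

With the model's real mapping in place of `toy_index`, the returned move can be played directly with `board.push(move)`.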
## Model Files
- `chess_model.pth` - PyTorch model weights
- `model.py` - Model architecture and board encoding
- `mcts.py` - Monte Carlo Tree Search implementation
- `requirements.txt` - Python dependencies
## Limitations
- Trained on expert games only (no self-play yet)
- Lightweight architecture for educational purposes
- May not handle unusual openings or endgames well
## Training Details
- Framework: PyTorch
- Optimizer: Adam
- Learning Rate: 0.001
- Batch Size: 256
- Loss Functions: CrossEntropyLoss (policy) + MSELoss (value)
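The recipe above can be sketched as a single training step. This is a hypothetical reconstruction from the listed hyperparameters (the `training_step` helper and the tensor shapes are assumptions, not the project's actual training loop):

```python
import torch
import torch.nn as nn

# Loss functions as listed: CrossEntropyLoss (policy) + MSELoss (value)
policy_loss_fn = nn.CrossEntropyLoss()
value_loss_fn = nn.MSELoss()

def training_step(model, optimizer, boards, target_moves, target_values):
    """boards: (B, 18, 8, 8); target_moves: (B,) class indices in [0, 4672);
    target_values: (B,) game outcomes in [-1, 1]."""
    optimizer.zero_grad()
    policy_logits, values = model(boards)
    loss = (policy_loss_fn(policy_logits, target_moves)
            + value_loss_fn(values.squeeze(-1), target_values))
    loss.backward()
    optimizer.step()
    return loss.item()
```

With `torch.optim.Adam(model.parameters(), lr=0.001)` and batches of 256, this matches the hyperparameters listed above.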
## Authors
Created as part of an AlphaZero-style chess engine project.
## License
MIT License