---
configs:
- config_name: default
  data_files:
  - split: train
    path: train/*.pgn.zst
  - split: test
    path: test/*.pgn.zst
license: mit
tags:
- code
size_categories:
- 100M<n<1B
pretty_name: ChessBot-Dataset
dataset_info:
  features:
  - name: state
    dtype:
      array2_d:
        shape:
        - 8
        - 8
        dtype: int8
  - name: action
    dtype: int16
  - name: best_action
    dtype: int16
  - name: legal_actions
    sequence: int16
  - name: result
    dtype: float32
  splits:
  - name: train
    num_bytes: 65545518162
    num_examples: -1
  - name: test
    num_bytes: 7298385021
    num_examples: -1
  download_size: 7621976329
  dataset_size: 72843903183
---

This repository contains a large curated chess dataset for machine learning, currently comprising approximately **2 billion positions** in **PGN format**. Credit to the following main sources:

- Lumbra's Database (filtered 2600+)
- Lichess Puzzle Database 
- Computer Chess: TCEC Database, CCRL, LC0 training data (test80)
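Per the `dataset_info` schema in the frontmatter above, each decoded example pairs an 8×8 `int8` board state with `int16` move indices and a `float32` result. The exact encodings are defined by the dataset's loading script; the piece values and action indexing below are illustrative assumptions, not the dataset's actual scheme.

```python
# Illustrative sketch of a single decoded example, matching the schema in the
# YAML frontmatter. Piece values and action indexing are ASSUMPTIONS for
# illustration; the real mapping lives in ChessBot-Dataset.py.

# Assumed piece encoding: 0 = empty, +1..+6 = white P,N,B,R,Q,K,
# negative values = the corresponding black pieces.
EMPTY, P, N, B, R, Q, K = 0, 1, 2, 3, 4, 5, 6

def start_state():
    """Return the initial chess position as an 8x8 list of int8-range values."""
    back = [R, N, B, Q, K, B, N, R]
    board = [[EMPTY] * 8 for _ in range(8)]
    board[0] = back[:]              # white back rank
    board[1] = [P] * 8              # white pawns
    board[6] = [-P] * 8             # black pawns
    board[7] = [-x for x in back]   # black back rank
    return board

def encode_action(from_sq, to_sq):
    """Assumed flat move index: from_square * 64 + to_square (fits in int16)."""
    return from_sq * 64 + to_sq

example = {
    "state": start_state(),                  # 8x8 int8 array
    "action": encode_action(12, 28),         # e.g. e2-e4 with a1=0..h8=63 squares
    "best_action": encode_action(12, 28),
    "legal_actions": [encode_action(12, 28), encode_action(11, 27)],
    "result": 1.0,                           # game outcome as float32
}
```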

The supporting dataset code and training platform, ChessBot-Battleground, is available at https://github.com/KeithG33/ChessBot-Battleground/

*Note*: The dataset uses a custom loading script (`ChessBot-Dataset.py`) to parse PGN data on the fly.
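Script-backed Hub datasets like this are typically loaded with `datasets.load_dataset` and `trust_remote_code=True`, which lets the bundled `ChessBot-Dataset.py` parse the `.pgn.zst` shards on the fly. A minimal sketch; the repository id is a placeholder, and `streaming=True` is used here to avoid the multi-gigabyte download:

```python
def load_chessbot(repo_id, split="train", streaming=True):
    """Load the ChessBot dataset via its custom loading script.

    `repo_id` is the Hugging Face Hub id of this repository (a placeholder
    in the usage below). `trust_remote_code=True` is required so the bundled
    ChessBot-Dataset.py can run and parse the PGN data on the fly.
    """
    from datasets import load_dataset  # Hugging Face `datasets` package
    return load_dataset(
        repo_id,
        split=split,
        streaming=streaming,
        trust_remote_code=True,
    )

# Usage (placeholder repo id):
# ds = load_chessbot("<user>/ChessBot-Dataset", split="test")
# first = next(iter(ds))  # dict with state/action/best_action/legal_actions/result
```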