---
license: cc-by-sa-4.0
language:
  - en
tags:
  - text-to-sql
  - bird
  - spider
  - finer-sql
  - training-data
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: bird_train_no_gen_table.tar.gz
---

# FINER-SQL — Training Resources Bundle

Convenience bundle of all the data assets needed to train and evaluate FINER-SQL on BIRD-bench. Companion to the `thanhdath/FINER-SQL-3B-BIRD` and `thanhdath/FINER-SQL-3B-BIRD-no-gen` model cards.

⚠️ The training pipeline — single-GPU continual GRPO from FINER-SQL-3B-BIRD to a no-gen specialist — is documented in `TRAIN_3B_BIRD_NO_GEN.md`. This dataset gives you everything in §4 of that guide in one place.

## Files

| File | Size (compressed) | Size (extracted) | What it is |
|------|-------------------|------------------|------------|
| `bird_dev.tar.gz` | ~1.0 GB | ~3.5 GB | BIRD dev release: `dev_databases/`, `dev_gold.sql`, `dev.json`. Required by the official BIRD evaluator (`evaluation_bird_ex.py`) and by the SQL execution sandbox. |
| `bird_train.tar.gz` | ~10 GB | ~40 GB | BIRD train databases (`train_databases/`). Required for the GRPO reward: the trainer executes both candidate and gold SQLs against these SQLite databases. |
| `bird_train_no_gen_table.tar.gz` | 3.4 MB | 60 MB | HuggingFace Dataset (Arrow) with 9,428 BIRD train prompts in vanilla / no-gen-table format (top-30 GRAST columns + raw schema, no LLM-generated meanings). The training set used for the no-gen specialist. |
| `gt_rows_cache.pkl.gz` | 17 MB | 76 MB | Pickled `{(dataset, db_id, gold_sql): rows}` cache of executed gold SQLs for both BIRD train and dev. Speeds up the first 1–2 epochs of GRPO reward computation by 5–10× (no need to re-execute every gold SQL). |
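
For a quick look at what the cache holds, a short probe like the following works. This is a sketch: the key layout follows the table above, and anything beyond that (row types, ordering) is an assumption.

```python
import gzip
import pickle

# Read the cache straight from the compressed download.
with gzip.open("gt_rows_cache.pkl.gz", "rb") as f:
    gt_cache = pickle.load(f)

print(len(gt_cache), "cached gold executions")

# Keys are (dataset, db_id, gold_sql) tuples per the table above;
# values are the rows the gold SQL returned against that database.
(dataset, db_id, gold_sql), rows = next(iter(gt_cache.items()))
print(dataset, db_id, gold_sql[:60])
print(rows[:3])
```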

## Quick download (everything)

```bash
# Bulk download
huggingface-cli download thanhdath/finer-sql-training-bundle \
    --repo-type dataset \
    --local-dir ~/finer-sql-data --local-dir-use-symlinks False

# Lay the archives out into the paths the training scripts expect
cd ~/finer-sql-data
mkdir -p ~/data/bird ~/data/grast-sql-data/data-train

tar xf bird_dev.tar.gz                    -C ~/data/bird/             # → dev/dev_databases, dev_gold.sql, dev.json
# Fallback when the archive extracts flat (dev.json does not match dev_*, so move it explicitly)
mkdir -p ~/data/bird/dev && mv ~/data/bird/dev_* ~/data/bird/dev.json ~/data/bird/dev/ 2>/dev/null || true
mkdir -p ~/data/bird/train && tar xf bird_train.tar.gz -C ~/data/bird/train/
tar xf bird_train_no_gen_table.tar.gz     -C ~/data/grast-sql-data/data-train/

gunzip -c gt_rows_cache.pkl.gz > ~/data/gt_rows_cache.pkl
```
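
If you would rather stay in Python, `snapshot_download` from `huggingface_hub` is the library equivalent of the CLI call above; this sketch fetches the same four files into the same directory:

```python
from pathlib import Path
from huggingface_hub import snapshot_download

# Python equivalent of the huggingface-cli bulk download above.
snapshot_download(
    repo_id="thanhdath/finer-sql-training-bundle",
    repo_type="dataset",
    local_dir=Path.home() / "finer-sql-data",
)
```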

After this, the canonical paths used by `train_bird_no_gen_table_v2.sh`, `eval_final_3b_bird.sh`, and `reproduce.py` are populated:

```text
~/data/bird/dev/dev_databases/              ← BIRD_DB_ROOT
~/data/bird/dev/dev_gold.sql                ← BIRD_GOLD
~/data/bird/dev/dev.json                    ← BIRD_DIFF
~/data/bird/train/train_databases/          ← used by db_execution/api.py
~/data/grast-sql-data/data-train/grpo_sql_writer_bird_train_no_gen_table/
~/data/gt_rows_cache.pkl
```
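
As a quick sanity check that the layout step produced every canonical path, a sketch mirroring the list above:

```python
from pathlib import Path

home = Path.home()
expected = [
    home / "data/bird/dev/dev_databases",
    home / "data/bird/dev/dev_gold.sql",
    home / "data/bird/dev/dev.json",
    home / "data/bird/train/train_databases",
    home / "data/grast-sql-data/data-train/grpo_sql_writer_bird_train_no_gen_table",
    home / "data/gt_rows_cache.pkl",
]
for path in expected:
    print("OK     " if path.exists() else "MISSING", path)
```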

## Selective download (just what you need)

```python
from pathlib import Path
from huggingface_hub import hf_hub_download

local_dir = Path.home() / "finer-sql-data"  # "~" is not expanded inside Python strings

# Only the no-gen training arrow (60 MB extracted) — for re-running GRPO
hf_hub_download("thanhdath/finer-sql-training-bundle",
                "bird_train_no_gen_table.tar.gz", repo_type="dataset",
                local_dir=local_dir)

# Only the GT cache (76 MB extracted) — speeds up reward calc
hf_hub_download("thanhdath/finer-sql-training-bundle",
                "gt_rows_cache.pkl.gz", repo_type="dataset",
                local_dir=local_dir)

# Only the BIRD dev (3.5 GB extracted) — for evaluation
hf_hub_download("thanhdath/finer-sql-training-bundle",
                "bird_dev.tar.gz", repo_type="dataset",
                local_dir=local_dir)
```
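Each selective download still needs extracting into the canonical layout. The stdlib `tarfile` module covers that if you want to stay in Python; a sketch for the no-gen arrow, mirroring the bash layout above:

```python
import tarfile
from pathlib import Path

src = Path.home() / "finer-sql-data/bird_train_no_gen_table.tar.gz"
dst = Path.home() / "data/grast-sql-data/data-train"
dst.mkdir(parents=True, exist_ok=True)

with tarfile.open(src) as tar:
    tar.extractall(dst)  # → grpo_sql_writer_bird_train_no_gen_table/
```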

## Provenance

- `bird_dev.tar.gz` and `bird_train.tar.gz` are repackaged from the public BIRD-bench dev/train releases; extracting them yields files byte-identical to extracting the upstream zips. The original license applies.
- `bird_train_no_gen_table.tar.gz` is generated by the GRAST-SQL schema-linker pipeline on top of the BIRD train split. The `messages` column renders the chat template; `groundtruth_sqls` carries the (multiple) acceptable golds per question (see the loading sketch after this list).
- `gt_rows_cache.pkl.gz` is built from BIRD train + dev gold SQLs by `build_gt_cache.py` (no human labour beyond the upstream gold SQLs).
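
To peek at the training set after extraction, something like this should work, assuming the archive contains a `save_to_disk`-style Dataset directory (which the Arrow file in the table suggests):

```python
from pathlib import Path
from datasets import load_from_disk

ds_dir = (Path.home() / "data/grast-sql-data/data-train"
                      / "grpo_sql_writer_bird_train_no_gen_table")
ds = load_from_disk(str(ds_dir))
print(ds)                       # expect 9,428 rows

row = ds[0]
print(row["messages"][0])       # chat-template prompt
print(row["groundtruth_sqls"])  # list of acceptable gold SQLs
```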

## Reproducing FINER-SQL with this bundle

```bash
git clone https://github.com/thanhdath/finer-sql.git && cd finer-sql

export BIRD_DB_ROOT=~/data/bird/dev/dev_databases/
export BIRD_GOLD=~/data/bird/dev/dev_gold.sql
export BIRD_DIFF=~/data/bird/dev/dev.json

# Stand up the SQL executor sandbox (point it at ~/data/bird/{train,dev}).
# Run it in a subshell so the cd does not leak into the training steps below.
(cd db_execution && uvicorn api:app --host 0.0.0.0 --port 8001 --workers 8) &

# Continual GRPO from the joint BIRD+Spider checkpoint → no-gen specialist
bash train_bird_no_gen_table_v2.sh

# Evaluate every saved checkpoint
for s in 20 40 60 80 100; do
    bash eval_final_3b_bird.sh \
        output/grpo_bird_3b_no_gen_table_v2/checkpoint-$s \
        ~/data/grast-sql-data/data-train/.../bird_dev_top30_prompts_v2_no_gen_table \
        no_gen_step_$s 0
done
```
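
For orientation, the reward these databases ground is execution match: candidate and gold SQL both run against the SQLite file and their result sets are compared, which is also how the official BIRD evaluator scores execution accuracy. A minimal sketch, not the sandbox's actual code (db path and SQLs are placeholders):

```python
import sqlite3

def execution_match(db_path: str, pred_sql: str, gold_sql: str) -> bool:
    """True when the predicted SQL returns the same result set as the gold."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = set(conn.execute(pred_sql).fetchall())
        gold_rows = set(conn.execute(gold_sql).fetchall())
    except sqlite3.Error:
        return False  # an unexecutable prediction earns no reward
    finally:
        conn.close()
    return pred_rows == gold_rows
```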

## Citation

```bibtex
@article{finer-sql-2026,
  title  = {FINER-SQL: Fine-grained reasoning rewards for small Text-to-SQL models},
  author = {Thanh Dat and others},
  year   = {2026},
}
```

BIRD-bench:

```bibtex
@inproceedings{li2023bird,
  title     = {{Can LLM Already Serve as a Database Interface? A {BIG} Bench for Large-Scale Database Grounded Text-to-SQLs}},
  author    = {Li, Jinyang and Hui, Binyuan and Qu, Ge and Yang, Jiaxi and Li, Binhua and Li, Bowen and Wang, Bailin and Qin, Bowen and Geng, Ruiying and others},
  booktitle = {NeurIPS},
  year      = {2023}
}
```