FINER-SQL-3B-Spider

Trained from griffith-bigdata/Qwen-2.5-Coder-3B-SQL-Writer using GRPO with two dense rewards from the FINER-SQL paper:

🧠 Memory Reward — aligns reasoning with verified traces
⚙️ Atomic Reward — measures operation-level SQL overlap

85.1% Execution Accuracy on Spider Dev with n=30 candidates, value-aware voting, and a single-sentence GROUP-BY system-prompt addition. Inference runs on a single 12–24 GB GPU.

📄 See other models: https://huggingface.co/collections/griffith-bigdata/finer-sql
📄 GitHub: https://github.com/thanhdath/finer-sql/tree/main


Comparison: FINER-SQL-3B-BIRD vs FINER-SQL-3B-Spider

Both models share the same Qwen-2.5-Coder-3B-SQL-Writer base. They differ only in the GRPO fine-tuning dataset (BIRD train vs Spider train).

| Model | BIRD Dev (n=30, vav) | Spider Dev (n=30, vav, +agg_hint) | When to use |
|---|---|---|---|
| FINER-SQL-3B-BIRD | 67.54% | 83.8% | Production BIRD; cross-domain SQL where training data is BIRD-like |
| FINER-SQL-3B-Spider (this model) | 63.04% | 85.1% | Production Spider / Spider-style schemas |

Why two checkpoints? BIRD and Spider use different SQL annotation conventions (BIRD: verbose, evidence-based, alias-heavy; Spider: terse, aggregate-first GROUP BY, exact ORDER BY direction). A single model trained on either dataset specialises to its annotations and loses ~1–4 pp on the other benchmark. We tried joint training and inference-time prompt tricks; they bridge most of the gap, but recovering the last 1–2 pp on each benchmark requires the dataset-specific checkpoint.


Inference

Important: This model expects a small addition to the standard system prompt for best Spider Dev performance:

- When using GROUP BY, list aggregate functions (COUNT, SUM, AVG, MIN, MAX) in the SELECT clause BEFORE the grouping column(s).

Without that line, Spider Official EX drops from 85.1% → 84.2% because Spider gold annotations consistently put aggregates first while the base model defaults to dimension-first ordering.
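The ordering convention can be illustrated on a toy SQLite table (the schema and data here are hypothetical, not from Spider):

```python
import sqlite3

# Toy schema purely for illustration -- not a Spider database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (name TEXT, country TEXT)")
conn.executemany("INSERT INTO singer VALUES (?, ?)",
                 [("A", "US"), ("B", "US"), ("C", "FR")])

# Spider-style annotation: aggregate listed BEFORE the grouping column.
spider_style = "SELECT COUNT(*), country FROM singer GROUP BY country"
# Base-model default: grouping column first. Same rows, but the values
# within each tuple are ordered differently, which execution-match
# evaluation treats as a different result.
default_style = "SELECT country, COUNT(*) FROM singer GROUP BY country"

print(sorted(conn.execute(spider_style).fetchall()))   # [(1, 'FR'), (2, 'US')]
print(sorted(conn.execute(default_style).fetchall()))  # [('FR', 1), ('US', 2)]
```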

Quick start (vLLM)

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="thanhdath/FINER-SQL-3B-Spider",
    dtype="bfloat16",
    max_model_len=4096,
    gpu_memory_utilization=0.85,
)

system_prompt = """You are a meticulous SQL expert. Generate a single, correct SQL query for the user question and the provided database schema.
Follow these rules exactly:
- Output exactly one SQL statement; do not include any explanatory text, extra tags, or code fences.
- The SQL must be executable on SQLite.
- When using GROUP BY, list aggregate functions (COUNT, SUM, AVG, MIN, MAX) in the SELECT clause BEFORE the grouping column(s)."""

# Generate n=30 candidates at t=1.0, then select with value-aware voting (vav)
sampling = SamplingParams(n=30, temperature=1.0, max_tokens=2048)

# Placeholder inputs -- supply your own schema DDL and question
schema = "CREATE TABLE singer (singer_id INTEGER PRIMARY KEY, name TEXT, country TEXT);"
question = "How many singers are there for each country?"

# llm.chat applies the chat template from tokenizer_config.json
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Database Schema:\n{schema}\n\nQuestion: {question}"},
]
output = llm.chat(messages, sampling)
# Keep only the final SQL after any reasoning trace
candidate_sqls = [c.text.split("</think>")[-1].strip() for c in output[0].outputs]
# Then run value-aware voting (vav) -- see https://github.com/thanhdath/finer-sql for the selector
```

Recommended evaluation pipeline

  1. Generate n=30 SQL candidates per question with temperature=1.0
  2. Execute each candidate against the database, collect result tuples
  3. Group candidates by execution result; pick a candidate from the largest group whose result is non-empty and not a degenerate all-zero pattern (value-aware voting, "vav")
  4. Score against gold SQL with the official Spider evaluator (test_suite_sql_eval)

This pipeline gives 85.1% Spider Dev EX. See evaluation/ in the repo for the reference implementation.
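Step 3 can be sketched as follows. This is a minimal illustration in our own words — the function name and the exact degenerate-result filter are assumptions; the reference selector is in the GitHub repo:

```python
import sqlite3
from collections import defaultdict

def looks_degenerate(rows):
    """Empty result, or every value 0/NULL -- excluded from voting."""
    return not rows or all(v in (0, None) for row in rows for v in row)

def value_aware_vote(candidate_sqls, db_path):
    """Group candidates by execution result; return one from the largest
    non-degenerate group, falling back to the largest group overall."""
    buckets = defaultdict(list)  # result signature -> candidate SQLs
    for sql in candidate_sqls:
        try:
            with sqlite3.connect(db_path) as conn:
                rows = conn.execute(sql).fetchall()
        except sqlite3.Error:
            continue  # candidates that fail to execute never vote
        # Order-insensitive signature so equivalent results share a bucket
        buckets[tuple(sorted(rows, key=repr))].append(sql)
    if not buckets:
        return None
    ranked = sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)
    for rows_key, sqls in ranked:
        if not looks_degenerate(rows_key):
            return sqls[0]
    return ranked[0][1][0]
```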


Detailed Spider Dev results (this model, n=30, vav, agg_hint)

| Hardness | Count | Execution Accuracy |
|---|---|---|
| Easy | 248 | 94.8% |
| Medium | 446 | 90.1% |
| Hard | 174 | 78.2% |
| Extra Hard | 166 | 64.5% |
| All | 1034 | 85.1% |

Recall@30 (any-correct rate among the 30 candidates): 91.3% — the upper bound on what selection strategies can achieve.
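Recall@n and the selection upper bound can be computed from a per-question correctness matrix; the data below is a hypothetical 3-question illustration, not the model's actual results:

```python
# Hypothetical data: correct[i][j] = True if candidate j for question i
# executes to the gold result. Real matrices come from the evaluator.
correct = [
    [False, True, False],   # recoverable: some candidate is right
    [False, False, False],  # unrecoverable: no selector can fix this
    [True, True, True],
]
# Recall@n: fraction of questions with at least one correct candidate.
# This upper-bounds the EX of ANY selection strategy over those candidates.
recall_at_n = sum(any(row) for row in correct) / len(correct)
print(f"{recall_at_n:.3f}")  # prints 0.667
```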


Training

| Parameter | Value |
|---|---|
| Base model | griffith-bigdata/Qwen-2.5-Coder-3B-SQL-Writer |
| Algorithm | GRPO |
| Train data | Spider train (8,659 samples) |
| Total steps | 4000 (this checkpoint = step 4000) |
| Learning rate | 8e-6 |
| Generations per prompt | 32 |
| Gradient accumulation | 32 |
| Max completion length | 2048 |
| Max prompt length | 1500 |
| Rollout temperature | 1.0 |
| Selection during eval | vav (value-aware voting) |
| Rewards | Execution + Atomic + Memory + Format |
| GPU | 1× A6000 48 GB |

License

Inherits the base model's license (Apache 2.0). Not for medical, legal, or other safety-critical autonomous decision-making.


Citation

```bibtex
@article{finer-sql-2026,
  title  = {FINER-SQL: Fine-grained reasoning rewards for small Text-to-SQL models},
  author = {Thanh Dat and others},
  year   = {2026},
}
```