---
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- text-to-sql
- reinforcement-learning
---

# SLM-SQL: An Exploration of Small Language Models for Text-to-SQL

### Important Links

📖[Paper](https://arxiv.org/abs/2507.22478) | 💻[GitHub](https://github.com/CycloneBoy/slm_sql) | 🤗[HuggingFace Collection](https://huggingface.co/collections/cycloneboy/slm-sql-688b02f99f958d7a417658dc) | 🤖[ModelScope Collection](https://modelscope.cn/collections/SLM-SQL-624bb6a60e9643)

## News

+ `July 31, 2025`: Uploaded models to ModelScope and Hugging Face.
+ `July 30, 2025`: Published the paper on arXiv.

## Introduction

Large language models (LLMs) have demonstrated strong performance in translating natural language questions into SQL queries (Text-to-SQL). In contrast, small language models (SLMs) ranging from 0.5B to 1.5B parameters currently underperform on Text-to-SQL tasks due to their limited logical reasoning capabilities. However, SLMs offer inherent advantages in inference speed and suitability for edge deployment. To explore their potential in Text-to-SQL applications, we leverage recent advancements in post-training techniques. Specifically, we use the open-source SynSQL-2.5M dataset to construct two derived datasets: SynSQL-Think-916K for SQL generation and SynSQL-Merge-Think-310K for SQL merge revision. We then apply supervised fine-tuning and reinforcement learning-based post-training to the SLM, followed by inference with a corrective self-consistency approach. Experimental results validate the effectiveness and generalizability of our method, SLM-SQL. On the BIRD development set, the five evaluated models achieve an average improvement of 31.4 points. Notably, the 0.5B model reaches 56.87% execution accuracy (EX), while the 1.5B model achieves 67.08% EX.

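The corrective self-consistency step follows the companion CSC-SQL work: sample several candidate queries, vote on them by execution result, and let a merge-revision model arbitrate between the two largest groups when they disagree. Below is a minimal sketch of that idea, not the released implementation; it assumes a SQLite database and a hypothetical `merge_revise` callable standing in for the merge-revision model.

```python
import sqlite3
from collections import Counter

def corrective_self_consistency(candidates, db_path, merge_revise):
    """Pick a final SQL query from sampled candidates (illustrative sketch).

    `candidates` is a list of SQL strings sampled from the SLM;
    `merge_revise` is a stand-in for the merge-revision model
    (e.g. the CscSQL-Merge-* checkpoints below) that takes two
    candidate queries and returns a corrected one.
    """
    def execute(sql):
        try:
            with sqlite3.connect(db_path) as conn:
                return tuple(map(tuple, conn.execute(sql).fetchall()))
        except Exception:
            return None  # Invalid SQL never wins the vote

    # Group candidates by their execution result (execution-based voting).
    votes = Counter()
    representative = {}
    for sql in candidates:
        result = execute(sql)
        if result is None:
            continue
        votes[result] += 1
        representative.setdefault(result, sql)

    ranked = [representative[r] for r, _ in votes.most_common(2)]
    if len(ranked) < 2:
        return ranked[0] if ranked else None
    # The two largest groups disagree: ask the revision model to merge them.
    return merge_revise(ranked[0], ranked[1])
```
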
### Framework

<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_framework.png" height="500" alt="slmsql_framework">

### Main Results

<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_bird_result.png" height="500" alt="slm_sql_result">

<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_bird_main.png" height="500" alt="slmsql_bird_main">

<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_spider_main.png" height="500" alt="slmsql_spider_main">

Performance comparison of different Text-to-SQL methods on the BIRD dev and test sets.

<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_ablation_study.png" height="300" alt="slmsql_ablation_study">

## Model

| **Model**                                | Base Model                   | Train Method | Modelscope | HuggingFace |
|------------------------------------------|------------------------------|--------------|------------|-------------|
| SLM-SQL-Base-0.5B                        | Qwen2.5-Coder-0.5B-Instruct  | SFT          | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-0.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-0.5B) |
| SLM-SQL-0.5B                             | Qwen2.5-Coder-0.5B-Instruct  | SFT + GRPO   | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-0.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-0.5B) |
| CscSQL-Merge-Qwen2.5-Coder-0.5B-Instruct | Qwen2.5-Coder-0.5B-Instruct  | SFT + GRPO   | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-0.5B-Instruct) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-0.5B-Instruct) |
| SLM-SQL-Base-1.5B                        | Qwen2.5-Coder-1.5B-Instruct  | SFT          | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-1.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-1.5B) |
| SLM-SQL-1.5B                             | Qwen2.5-Coder-1.5B-Instruct  | SFT + GRPO   | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-1.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-1.5B) |
| CscSQL-Merge-Qwen2.5-Coder-1.5B-Instruct | Qwen2.5-Coder-1.5B-Instruct  | SFT + GRPO   | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-1.5B-Instruct) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-1.5B-Instruct) |
| SLM-SQL-Base-0.6B                        | Qwen3-0.6B                   | SFT          | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-0.6B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-0.6B) |
| SLM-SQL-0.6B                             | Qwen3-0.6B                   | SFT + GRPO   | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-0.6B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-0.6B) |
| SLM-SQL-Base-1.3B                        | deepseek-coder-1.3b-instruct | SFT          | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-1.3B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-1.3B) |
| SLM-SQL-1.3B                             | deepseek-coder-1.3b-instruct | SFT + GRPO   | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-1.3B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-1.3B) |
| SLM-SQL-Base-1B                          | Llama-3.2-1B-Instruct        | SFT          | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-1B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-1B) |

## Sample Usage

The models can be loaded with the `transformers` library. The following example demonstrates Text-to-SQL generation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "cycloneboy/SLM-SQL-0.5B"  # Any model from the table above works here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bfloat16 matches the model's config
    device_map="auto",           # Automatically place the model on available devices (e.g., GPU)
)

# Example SQL schema (simplified for demonstration)
schema = """
CREATE TABLE employees (
    employee_id INT,
    first_name VARCHAR,
    last_name VARCHAR,
    department VARCHAR,
    salary INT
);
"""

# Natural language question
query = "Show me the first name and last name of employees in the 'Sales' department earning more than 50000."

# Build the prompt with the model's chat template, which adds the
# appropriate system/user tags automatically.
messages = [
    {
        "role": "user",
        "content": f"Translate the following natural language query into SQL:\n"
                   f"Schema: {schema}\n"
                   f"Query: {query}",
    }
]
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)

# Generate the SQL query
outputs = model.generate(**inputs, max_new_tokens=256, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, so the prompt and any
# template-specific markers (e.g. "### Response:" or <|EOT|> in the
# DeepSeek-Coder-based variants) do not end up in the output.
generated_sql = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
).strip()

print(generated_sql)

# Expected output (may vary slightly between runs):
# SELECT first_name, last_name FROM employees WHERE department = 'Sales' AND salary > 50000;
```

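Slicing off the prompt tokens before decoding keeps the snippet template-agnostic: no per-model post-processing is needed, so the same code works across the Qwen-, Llama-, and DeepSeek-Coder-based checkpoints in the table above.
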
## Dataset

| **Dataset**                | Modelscope | HuggingFace |
|----------------------------|------------|-------------|
| SynsQL-Think-916k          | [🤖 Modelscope](https://modelscope.cn/datasets/cycloneboy/SynsQL-Think-916k) | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/SynsQL-Think-916k) |
| SynsQL-Merge-Think-310k    | [🤖 Modelscope](https://modelscope.cn/datasets/cycloneboy/SynsQL-Merge-Think-310k) | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/SynsQL-Merge-Think-310k) |
| BIRD train and dev dataset | [🤖 Modelscope](https://modelscope.cn/datasets/cycloneboy/bird_train) | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/bird_train) |

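The datasets can be pulled directly from the Hub with the `datasets` library. A minimal sketch; the split names and record fields are not documented here, so inspect the loaded object rather than assuming a schema:

```python
from datasets import load_dataset

# Download the SQL-generation training data from the Hugging Face Hub.
ds = load_dataset("cycloneboy/SynsQL-Think-916k")

print(ds)  # Lists the available splits and features
first_split = next(iter(ds.values()))
print(first_split[0])  # Peek at a single record
```
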
## TODO

- [ ] Release inference code
- [ ] Upload models
- [ ] Release training code
- [ ] Fix bugs
- [ ] Update documentation

## Thanks to the following projects

- [csc_sql](https://github.com/CycloneBoy/csc_sql)
- [open-r1](https://github.com/huggingface/open-r1)
- [OmniSQL](https://github.com/RUCKBReasoning/OmniSQL)

## Citation

```bibtex
@misc{sheng2025slmsqlexplorationsmalllanguage,
      title={SLM-SQL: An Exploration of Small Language Models for Text-to-SQL},
      author={Lei Sheng and Shuai-Shuai Xu},
      year={2025},
      eprint={2507.22478},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.22478},
}

@misc{sheng2025cscsqlcorrectiveselfconsistencytexttosql,
      title={CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning},
      author={Lei Sheng and Shuai-Shuai Xu},
      year={2025},
      eprint={2505.13271},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.13271},
}
```