---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: text-generation
tags:
- text-to-sql
- qwen2
- reinforcement-learning
---

# CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning

This repository contains models and related information for the paper [CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning](https://huggingface.co/papers/2505.13271).

## Abstract

Large language models (LLMs) have demonstrated strong capabilities in translating natural language questions about relational databases into SQL queries. In particular, test-time scaling techniques such as Self-Consistency and Self-Correction can enhance SQL generation accuracy by increasing computational effort during inference. However, these methods have notable limitations: Self-Consistency may select suboptimal outputs despite majority votes, while Self-Correction typically addresses only syntactic errors. To leverage the strengths of both approaches, we propose CSC-SQL, a novel method that integrates Self-Consistency and Self-Correction. CSC-SQL selects the two most frequently occurring outputs from parallel sampling and feeds them into a merge revision model for correction. Additionally, we employ the Group Relative Policy Optimization (GRPO) algorithm to fine-tune both the SQL generation and revision models via reinforcement learning, significantly enhancing output quality. Experimental results confirm the effectiveness and generalizability of CSC-SQL. On the BIRD private test set, our 7B model achieves 71.72% execution accuracy, while the 32B model achieves 73.67%.
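
To make the selection step concrete, here is a minimal sketch of how the two most frequent candidates can be picked from parallel samples before being handed to the merge revision model. It is an illustration, not the paper's implementation: grouping here is by whitespace-normalized SQL text as a simplifying assumption, whereas the official code (linked below) defines the actual grouping and normalization (grouping by execution result is a common, stronger alternative).

```python
from collections import Counter

def top_two_candidates(samples: list[str]) -> tuple[str, str]:
    """Pick the two most frequent SQL strings from parallel samples.

    Minimal sketch: candidates are grouped by whitespace-normalized text.
    The official repo defines the real grouping logic.
    """
    def normalize(sql: str) -> str:
        return " ".join(sql.strip().rstrip(";").split())

    counts = Counter(normalize(s) for s in samples)
    ranked = [sql for sql, _ in counts.most_common(2)]
    # If all samples agree, fall back to the single majority candidate twice.
    if len(ranked) == 1:
        ranked.append(ranked[0])
    return ranked[0], ranked[1]

# Example: five parallel samples -> two candidates for the merge revision model
samples = [
    "SELECT AVG(age) FROM students;",
    "SELECT AVG(age) FROM students",
    "SELECT AVG(students.age) FROM students;",
    "SELECT AVG(age) FROM students;",
    "SELECT name FROM students;",
]
print(top_two_candidates(samples))
```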

## Code & Resources

- **GitHub Repository**: [https://github.com/CycloneBoy/csc_sql](https://github.com/CycloneBoy/csc_sql)
- **Hugging Face Collection**: [https://huggingface.co/collections/cycloneboy/csc-sql-6835c4a52da10c54bbe14f8e](https://huggingface.co/collections/cycloneboy/csc-sql-6835c4a52da10c54bbe14f8e)
- **ModelScope Collection**: [https://modelscope.cn/collections/CSC-SQL-8542177708b643](https://modelscope.cn/collections/CSC-SQL-8542177708b643)

## Framework Overview

*(Figure: the CSC-SQL framework.)*

## Main Results

Performance comparison of different Text-to-SQL methods on the BIRD dev and test sets:

*(Figure: main results on BIRD dev and test.)*

## Models and Datasets on Hugging Face

The following models and datasets related to CSC-SQL are available on Hugging Face:

| **Model and Dataset** | **Hugging Face Link** |
|-----------------------|-----------------------|
| BIRD train and dev dataset | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/bird_train) |
| CscSQL-Merge-Qwen2.5-Coder-3B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-3B-Instruct) |
| CscSQL-Merge-Qwen2.5-Coder-7B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-7B-Instruct) |
| CscSQL-Grpo-Qwen2.5-Coder-3B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-Qwen2.5-Coder-3B-Instruct) |
| CscSQL-Grpo-XiYanSQL-QwenCoder-3B-2502 | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-XiYanSQL-QwenCoder-3B-2502) |
| CscSQL-Grpo-Qwen2.5-Coder-7B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-Qwen2.5-Coder-7B-Instruct) |
| CscSQL-Grpo-XiYanSQL-QwenCoder-7B-2502 | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-XiYanSQL-QwenCoder-7B-2502) |

## Usage

The models can be loaded with the `transformers` library. Below is an example of text-to-SQL generation with one of the GRPO-tuned generators. For detailed training and evaluation instructions, please refer to the [official GitHub repository](https://github.com/CycloneBoy/csc_sql).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer.
# Qwen2 is natively supported by transformers, so trust_remote_code is not needed.
model_id = "cycloneboy/CscSQL-Grpo-Qwen2.5-Coder-7B-Instruct"  # Example 7B model from the project
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",  # or torch.bfloat16 if supported
).eval()

# Prepare your input: natural language question and database schema
question = "What is the average age of students?"
schema_info = """
CREATE TABLE students (
    student_id INT PRIMARY KEY,
    name TEXT,
    age INT,
    major TEXT
);
"""  # Replace with the actual schema from your database

# Construct the prompt: the schema and question, followed by "SQL:"
formatted_prompt = f"""Given the following database schema:
{schema_info}
Generate a SQL query for the following natural language question:
{question}
SQL:"""

messages = [
    {"role": "user", "content": formatted_prompt}
]

# Apply the Qwen2 chat template and tokenize
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # Appends the assistant turn header so the model starts its reply
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate the SQL query. Greedy decoding for reproducibility; sampling
# parameters such as temperature/top_p only take effect when do_sample=True.
generated_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)

# Decode only the newly generated tokens, skipping the echoed prompt
new_tokens = generated_ids[0][inputs["input_ids"].shape[1]:]
output_text = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(output_text)
```
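
The snippet above covers only the first stage (single-candidate generation). In the full CSC-SQL pipeline, several candidates are sampled in parallel, the two most frequent are kept (see the sketch under the abstract), and a merge revision model such as `CscSQL-Merge-Qwen2.5-Coder-7B-Instruct` corrects between them. The exact revision prompt is defined in the official repository; the function below is only a hedged sketch that reuses the `tokenizer`/`model` objects from the example above and invents a simple candidate-pair prompt purely for illustration.

```python
# Hypothetical sketch of the merge-revision step; the real prompt template
# lives in the official repo (https://github.com/CycloneBoy/csc_sql).
def revise(question: str, schema_info: str, sql_a: str, sql_b: str,
           tokenizer, model, max_new_tokens: int = 256) -> str:
    # Present both candidates and ask the revision model for a corrected query.
    revision_prompt = f"""Given the following database schema:
{schema_info}
Question: {question}

Candidate SQL 1: {sql_a}
Candidate SQL 2: {sql_b}

Select or correct the candidates and output the final SQL query.
SQL:"""
    messages = [{"role": "user", "content": revision_prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Return only the newly generated tokens
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```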

## Citation

If you find our work helpful or inspiring, please feel free to cite it:

```bibtex
@misc{sheng2025cscsqlcorrectiveselfconsistencytexttosql,
      title={CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning},
      author={Lei Sheng and Shuai-Shuai Xu},
      year={2025},
      eprint={2505.13271},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.13271},
}
```