# Phi-3 Mini SQL Generator (Merged Model)
This is the merged (standalone) version of Shizu0n/phi3-mini-sql-generator, built on microsoft/Phi-3-mini-4k-instruct.
The LoRA adapter weights have been merged directly into the base model, so it loads as a standard `AutoModelForCausalLM`; no PEFT dependency is required for inference.
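Loading therefore looks like any other Transformers checkpoint. A minimal sketch of the difference (the commented lines show what the adapter repo would require instead):

```python
from transformers import AutoModelForCausalLM

# Merged repo: a plain Transformers load, no PEFT involved.
model = AutoModelForCausalLM.from_pretrained("Shizu0n/phi3-mini-sql-generator-merged")

# The adapter repo would instead need PEFT on top of the base model:
# from peft import PeftModel
# base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
# model = PeftModel.from_pretrained(base, "Shizu0n/phi3-mini-sql-generator")
```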
## Why two versions?
| Repo | Purpose |
|---|---|
| Shizu0n/phi3-mini-sql-generator | Original QLoRA adapter; documents the training pipeline |
| Shizu0n/phi3-mini-sql-generator-merged | Merged standalone model; used for deployment and inference |
## Evaluation: Base vs Fine-tuned
Evaluated on 200 held-out examples from b-mc2/sql-create-context.
| Model | Exact Match |
|---|---|
| Phi-3-mini-4k-instruct (base) | 2.0% |
| This model (fine-tuned) | 73.5% |
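"Exact match" here means the generated query string equals the reference query. The evaluation script itself is not part of this card; the sketch below shows one plausible way to compute such a metric, assuming simple whitespace and case normalization:

```python
def normalize_sql(query: str) -> str:
    # Collapse whitespace, drop a trailing semicolon, lowercase.
    return " ".join(query.strip().rstrip(";").split()).lower()

def exact_match(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions matching their reference after normalization."""
    hits = sum(normalize_sql(p) == normalize_sql(r)
               for p, r in zip(predictions, references))
    return hits / len(references)
```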
## Training Details
- Dataset: b-mc2/sql-create-context (1,000 train / 200 validation examples)
- Method: QLoRA (4-bit NF4, LoRA rank 16, alpha 32)
- Hardware: NVIDIA T4 (Google Colab free tier)
- Training time: ~21 min
- Final train loss: 0.6526
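For reference, the settings above map onto a peft/bitsandbytes configuration roughly like the following. The `target_modules` list is an assumption (a common choice for Phi-3 attention/MLP projections); it is not recorded in this card.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization, as used by QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA rank 16, alpha 32, as listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
    # Assumed targets; typical Phi-3 projection layers.
    target_modules=["qkv_proj", "o_proj", "gate_up_proj", "down_proj"],
)
```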
## Validation
The merge was accepted only after all three smoke tests returned a concrete SQL query:
- PEFT adapter loaded on the base model.
- Local merged directory after `merge_and_unload()` and `save_pretrained()`.
- Downloaded model from this Hugging Face repo with `force_download=True`.
Reference smoke output:

```sql
SELECT AVG(salary), department FROM employees GROUP BY department
```
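The merge itself (tests 2 and 3 above exercise its output) follows the standard PEFT flow. A minimal sketch; the local output directory name is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the trained adapter.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "Shizu0n/phi3-mini-sql-generator")

# Fold the LoRA weights into the base weights and drop the PEFT wrapper.
merged = model.merge_and_unload()
merged.save_pretrained("phi3-mini-sql-generator-merged")  # assumed output dir
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
tokenizer.save_pretrained("phi3-mini-sql-generator-merged")
```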
## Inference example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shizu0n/phi3-mini-sql-generator-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype=torch.float16,
    device_map="auto",
    trust_remote_code=False,
    attn_implementation="eager",
)
model.config.use_cache = False

prompt = (
    "Given the following SQL table, write a SQL query.\n"
    "Table: employees (id, name, department, salary)\n"
    "Question: What is the average salary per department?\nSQL:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model.generate(
        **inputs,
        max_new_tokens=80,
        do_sample=False,  # greedy decoding for deterministic SQL
        use_cache=False,
        repetition_penalty=1.1,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens, skipping the prompt.
text = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(text.strip())
```
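Greedy decoding can run past the end of the query; if that happens, a small, assumed post-processing helper can trim the output to the first statement:

```python
def first_statement(generated: str) -> str:
    # Keep only the first statement: cut at a blank line or semicolon.
    statement = generated.strip().split("\n\n")[0]
    return statement.split(";")[0].strip()

print(first_statement(text))
```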
## Notes
- Use `trust_remote_code=False`; the built-in Transformers Phi-3 implementation avoids stale remote-code failures.
- Do not patch `rope_scaling` manually.
- Do not mutate `_tied_weights_keys` before saving.