How to use from llama.cpp
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
Use Docker
docker model run hf.co/StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
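Once llama-server is running, any OpenAI-compatible client can query it. Below is a minimal Python sketch using the openai package; the base URL assumes the llama-server default port, and the schema and question in the prompt are illustrative assumptions rather than part of this model card.
# Query the local llama-server through its OpenAI-compatible endpoint.
# Assumes the default llama-server address; the prompt below is illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

prompt = (
    "Schema: employees(id, name, department, salary)\n"
    "Write a PostgreSQL query that returns the highest-paid employee in each department."
)

response = client.chat.completions.create(
    model="data-sql-7b-oracle-postgresql-v2",  # llama-server serves whichever model it loaded
    messages=[{"role": "user", "content": prompt}],
    max_tokens=256,
)
print(response.choices[0].message.content)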
Model Card for data-sql-7b-oracle-postgresql-v2
This model is a fine-tuned version of Chinastark/DatA-SQL-7B. It has been trained using TRL.
Quick start
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="StellaYoon/data-sql-7b-oracle-postgresql-v2", device="cuda")

# Single-turn chat; return_full_text=False keeps only the newly generated reply.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
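Because the base model is a text-to-SQL model, a prompt that includes the table schema and the question is more representative. Continuing the snippet above; the schema, question, and prompt wording are illustrative assumptions, not a documented prompt format for this checkpoint.
# Continues the snippet above; schema and wording are illustrative assumptions.
sql_question = (
    "Schema: orders(order_id, customer_id, order_date, total_amount)\n"
    "Question: total revenue per month in 2024, in chronological order (PostgreSQL)."
)
sql_output = generator([{"role": "user", "content": sql_question}], max_new_tokens=256, return_full_text=False)[0]
print(sql_output["generated_text"])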
Training procedure
This model was trained with SFT.
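For orientation, here is a minimal sketch of an SFT run with TRL's SFTTrainer, starting from the base model named above; the dataset file, hyperparameters, and output directory are illustrative assumptions, not the configuration actually used for this checkpoint.
# Minimal TRL SFT sketch; dataset file, hyperparameters, and output_dir are
# assumptions, not the actual training configuration of this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("json", data_files="sql_sft_train.jsonl", split="train")  # hypothetical file

config = SFTConfig(
    output_dir="data-sql-7b-oracle-postgresql-v2",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = SFTTrainer(
    model="Chinastark/DatA-SQL-7B",  # base model named in this card
    args=config,
    train_dataset=train_dataset,
)
trainer.train()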
Framework versions
- TRL: 0.26.2
- Transformers: 4.57.3
- Pytorch: 2.9.0+cu126
- Datasets: 4.4.2
- Tokenizers: 0.22.1
Citations
Cite TRL as:
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
Hardware compatibility
4-bit
5-bit
Model tree for StellaYoon/data-sql-7b-oracle-postgresql-v2
Base model: Chinastark/DatA-SQL-7B
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf StellaYoon/data-sql-7b-oracle-postgresql-v2:Q4_K_M