Instructions for using seniruk/qwen2.5coder-0.5B_commit_msg with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- llama-cpp-python
How to use seniruk/qwen2.5coder-0.5B_commit_msg with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="seniruk/qwen2.5coder-0.5B_commit_msg",
    filename="qwen0.5-finetuned.gguf",
)
llm.create_chat_completion(
    messages = [
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
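The question above is just the widget's default placeholder; this model is fine-tuned to complete a fixed commit-message template (documented in the model card below), so raw completions with that template should behave better than generic chat. A minimal sketch reusing the llm object from above, with a toy diff standing in for real input:
# Prompt template the model was fine-tuned on (see the model card below)
commit_prompt = """Generate a concise and meaningful commit message based on the provided Git diff.
### Git Diff:
{}
### Commit Message:"""

# Raw text completion, bypassing chat formatting; the diff here is a toy example
output = llm(commit_prompt.format("diff --git a/app.py b/app.py\n+print('hello')"), max_tokens=64)
print(output["choices"][0]["text"].strip())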
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use seniruk/qwen2.5coder-0.5B_commit_msg with llama.cpp:
Install from Homebrew (macOS, Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf seniruk/qwen2.5coder-0.5B_commit_msg

# Run inference directly in the terminal:
llama-cli -hf seniruk/qwen2.5coder-0.5B_commit_msg
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf seniruk/qwen2.5coder-0.5B_commit_msg

# Run inference directly in the terminal:
llama-cli -hf seniruk/qwen2.5coder-0.5B_commit_msg
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf seniruk/qwen2.5coder-0.5B_commit_msg

# Run inference directly in the terminal:
./llama-cli -hf seniruk/qwen2.5coder-0.5B_commit_msg
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf seniruk/qwen2.5coder-0.5B_commit_msg

# Run inference directly in the terminal:
./build/bin/llama-cli -hf seniruk/qwen2.5coder-0.5B_commit_msg
Use Docker
docker model run hf.co/seniruk/qwen2.5coder-0.5B_commit_msg
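Whichever install route you choose, the running llama-server exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal Python sketch for calling it, assuming the requests library is installed:
import requests

# llama-server's default endpoint; adjust host/port if you changed the defaults
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "What is the capital of France?"}]},
)
print(resp.json()["choices"][0]["message"]["content"])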
- LM Studio
- Jan
- vLLM
How to use seniruk/qwen2.5coder-0.5B_commit_msg with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "seniruk/qwen2.5coder-0.5B_commit_msg"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "seniruk/qwen2.5coder-0.5B_commit_msg",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
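Because vLLM speaks the OpenAI API, the same server can also be called from Python with the official openai client instead of curl; a minimal sketch, assuming the server from the step above is running on localhost:8000:
from openai import OpenAI

# A default vLLM server does not check the API key, but the client requires one
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="seniruk/qwen2.5coder-0.5B_commit_msg",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)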
- Ollama
How to use seniruk/qwen2.5coder-0.5B_commit_msg with Ollama:
ollama run hf.co/seniruk/qwen2.5coder-0.5B_commit_msg
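For programmatic access, the ollama Python package can drive the same local model; a minimal sketch, assuming pip install ollama and a running Ollama daemon:
import ollama

# Use the same hf.co model reference as the `ollama run` command above
response = ollama.chat(
    model="hf.co/seniruk/qwen2.5coder-0.5B_commit_msg",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])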
- Unsloth Studio
How to use seniruk/qwen2.5coder-0.5B_commit_msg with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for seniruk/qwen2.5coder-0.5B_commit_msg to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for seniruk/qwen2.5coder-0.5B_commit_msg to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for seniruk/qwen2.5coder-0.5B_commit_msg to start chatting
- Pi
How to use seniruk/qwen2.5coder-0.5B_commit_msg with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf seniruk/qwen2.5coder-0.5B_commit_msg
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "seniruk/qwen2.5coder-0.5B_commit_msg"
        }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use seniruk/qwen2.5coder-0.5B_commit_msg with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf seniruk/qwen2.5coder-0.5B_commit_msg
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default seniruk/qwen2.5coder-0.5B_commit_msg
Run Hermes
hermes
- Docker Model Runner
How to use seniruk/qwen2.5coder-0.5B_commit_msg with Docker Model Runner:
docker model run hf.co/seniruk/qwen2.5coder-0.5B_commit_msg
- Lemonade
How to use seniruk/qwen2.5coder-0.5B_commit_msg with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull seniruk/qwen2.5coder-0.5B_commit_msg
Run and chat with the model
lemonade run user.qwen2.5coder-0.5B_commit_msg-{{QUANT_TAG}}
List all available models
lemonade list
Hi, I’m Seniru Epasinghe 👋
I’m an AI undergraduate and an AI enthusiast, working on machine learning projects and open-source contributions.
I enjoy exploring AI pipelines, natural language processing, and building tools that make development easier.
🌐 Connect with me
Fine-tuned qwen2.5-coder-0.5B model on 100,000 rows of a custom dataset containing git diffs and their respective commit messages.
Each row of the dataset was formatted as shown below to match the fine-tuning requirements of the Qwen2.5-coder model, so use the same prompt format at inference time for best results:
"""Generate a concise and meaningful commit message based on the provided Git diff.
### Git Diff:
{Git diff from dataset}
### Commit Message:"""
Code for running inference with the GGUF model is given below:
from llama_cpp import Llama
modelGGUF = Llama.from_pretrained(
repo_id="seniruk/qwen2.5coder-0.5B_commit_msg",
filename="qwen0.5-finetuned.gguf",
rope_scaling={"type": "linear", "factor": 2.0},
chat_format=None, # Disables any chat formatting
n_ctx=32768, # Set the context size explicitly
)
# Define the commit message prompt (Minimal format, avoids assistant behavior)
commit_prompt = """Generate a meaningful commit message explaining all the changes in the provided Git diff.
### Git Diff:
{}
### Commit Message:"""  # No placeholder after "Commit Message:", so the model generates the text itself
# Git diff example for commit message generation
git_diff_example = """
diff --git a/index.html b/index.html
index 89abcde..f123456 100644
--- a/index.html
+++ b/index.html
@@ -5,16 +5,6 @@ <body>
<h1>Welcome to My Page</h1>
- <table border="1">
- <tr>
- <th>Name</th>
- <th>Age</th>
- </tr>
- <tr>
- <td>John Doe</td>
- <td>30</td>
- </tr>
- </table>
+ <p>This is a newly added paragraph replacing the table.</p>
</body>
</html>
"""
# Prepare the raw input prompt
input_prompt = commit_prompt.format(git_diff_example)
# Generate commit message
output = modelGGUF(
input_prompt,
max_tokens=64,
temperature=0.6, # Balanced randomness
top_p=0.8, # Controls nucleus sampling
top_k=50, # Limits vocabulary selection
)
# Decode and print the output
commit_message = output["choices"][0]["text"].strip()
print("\nGenerated Commit Message:\n{}".format(commit_message))