---
language:
- en
license: other
library_name: transformers
pipeline_tag: text-generation
tags:
- gguf
- hunyuan
- python
- code-generation
- code-assistant
- instruct
- conversational
- causal-lm
- full-finetune
base_model:
- tencent/Hunyuan-0.5B-Instruct
datasets:
- WithinUsAI/Python_GOD_Coder_Omniforge_AI_12k
- WithinUsAI/Python_GOD_Coder_5k
- WithinUsAI/Legend_Python_CoderV.1
model-index:
- name: Hunyuan-PythonGOD-0.5B-GGUF
  results: []
---

# Hunyuan-PythonGOD-0.5B-GGUF

**Hunyuan-PythonGOD-0.5B-GGUF** is a compact Python-specialized coding model released in GGUF format for lightweight local inference. It is derived from a full fine-tune of `tencent/Hunyuan-0.5B-Instruct` and is aimed at code generation, Python scripting, debugging help, implementation tasks, and coding-oriented chat workflows.

This repo provides quantized GGUF builds for efficient use with llama.cpp-compatible runtimes and other GGUF-serving backends.

## Model Details

### Base Model
- **Base model:** `tencent/Hunyuan-0.5B-Instruct`
- **Architecture:** Causal decoder-only language model
- **Parameter scale:** ~0.5B
- **Specialization:** Python coding and general code-assistant behavior
- **Release format:** GGUF

### Included Files
- `Hunyuan-PythonGOD-0.5B.Q4_K_M.gguf`
- `Hunyuan-PythonGOD-0.5B.Q5_K_M.gguf`
- `Hunyuan-PythonGOD-0.5B.f16.gguf`
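
To fetch a single quant programmatically, a minimal sketch with `huggingface_hub` is shown below; the `REPO_ID` value is a placeholder, not the confirmed repository ID.

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# Replace REPO_ID with this repository's actual ID on the Hub.
from huggingface_hub import hf_hub_download

REPO_ID = "<org>/Hunyuan-PythonGOD-0.5B-GGUF"  # placeholder, not the confirmed repo ID

# Download only the Q5_K_M build; the file name matches the "Included Files" list above.
model_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="Hunyuan-PythonGOD-0.5B.Q5_K_M.gguf",
)
print(model_path)  # local cache path of the downloaded GGUF file
```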

## Training Summary

This GGUF release is based on a **full fine-tune**, not an adapter-only export.

### Training Datasets
- `WithinUsAI/Python_GOD_Coder_Omniforge_AI_12k`
- `WithinUsAI/Python_GOD_Coder_5k`
- `WithinUsAI/Legend_Python_CoderV.1`
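
To inspect any of these datasets locally, a small sketch with the `datasets` library is shown below; split names and column layout are not documented here, so the code simply prints whatever structure the Hub returns.

```python
# Quick inspection sketch using the Hugging Face `datasets` library (pip install datasets).
# Printing the returned DatasetDict shows the actual splits and columns without
# guessing field names, which are not documented in this card.
from datasets import load_dataset

ds = load_dataset("WithinUsAI/Python_GOD_Coder_5k")
print(ds)  # lists available splits, column names, and row counts
```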

### Training Characteristics
- Full-parameter fine-tuning
- Python/code-oriented instruction tuning
- Exported as standard model weights before GGUF conversion
- Intended for compact coding assistance and local inference experimentation

## Intended Uses

### Good Fits
- Python function generation
- Python script writing
- Debugging assistance
- Automation script drafting
- Code-oriented local assistants
- Small-model coding experiments

### Not Intended For
- Safety-critical software deployment without review
- Autonomous execution without sandboxing
- Guaranteed bug-free or secure code generation
- Medical, legal, or financial decision support

## Quantization Notes

This repo includes multiple tradeoff points:

- **Q4_K_M**: smallest footprint; fastest, lightest inference
- **Q5_K_M**: stronger quality-to-size balance than Q4_K_M
- **F16**: highest fidelity in this repo, at the largest memory cost
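
As rough intuition for the size differences, the sketch below estimates file sizes from approximate bits-per-weight figures commonly cited for llama.cpp K-quants; the exact numbers vary per model, so treat the output as a ballpark rather than the real file sizes.

```python
# Rough size estimate per quantization level. The bits-per-weight values are
# approximate averages for llama.cpp K-quants, not measurements of this model's
# files; the parameter count is the nominal ~0.5B from the card.
PARAMS = 0.5e9

APPROX_BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,   # approximate
    "Q5_K_M": 5.7,   # approximate
    "F16": 16.0,
}

for name, bpw in APPROX_BITS_PER_WEIGHT.items():
    size_gb = PARAMS * bpw / 8 / 1e9
    print(f"{name}: ~{size_gb:.2f} GB (excluding metadata and tokenizer overhead)")
```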

## Example llama.cpp Usage

```bash
./llama-cli -m Hunyuan-PythonGOD-0.5B.Q5_K_M.gguf -p "Write a Python function that validates an email address." -n 256
```
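
For programmatic use from Python, a minimal sketch with the `llama-cpp-python` bindings is shown below; the sampling parameters are illustrative, and the chat template is taken from the GGUF metadata when the file embeds one.

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
# Paths and sampling parameters are illustrative defaults, not recommended settings.
from llama_cpp import Llama

llm = Llama(
    model_path="Hunyuan-PythonGOD-0.5B.Q5_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your memory budget
)

# Chat-style request; llama-cpp-python applies the chat template embedded in
# the GGUF metadata when one is present.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that validates an email address."}
    ],
    max_tokens=256,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```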