---
language:
- en
license: apache-2.0
tags:
- duoneural
- sft
- qwen
- qwen2.5-coder
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
datasets:
- DuoNeural/Gemma4-E2B-SFT-WebCode
---

# Qwen2.5-Coder-3B-SFT-WebCode

**📊 Recorded** — SFT fine-tune by [DuoNeural](https://huggingface.co/DuoNeural).

- **Base model:** [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct)
- **Dataset:** [DuoNeural/Gemma4-E2B-SFT-WebCode](https://huggingface.co/datasets/DuoNeural/Gemma4-E2B-SFT-WebCode)
- **Training:** LoRA (rank = 16, α = 32), 3 epochs, learning rate 2e-4, effective batch size 16
- **Eval:** GSM8K and ARC-Challenge via lm_eval 0.4.x

## Benchmark Results

| Model | GSM8K (flexible extract) | ARC-Challenge (acc_norm) | ARC-Challenge (acc) |
|---|---|---|---|
| Baseline | 0.5807 | 0.4957 | 0.4590 |
| **Qwen2.5-Coder-3B-SFT-WebCode** | 0.3207 | 0.4957 | 0.4590 |
| Δ | -0.2600 | 0.0000 | 0.0000 |

The web-code SFT leaves both ARC-Challenge metrics unchanged but costs 26 points of GSM8K accuracy relative to the baseline.

## About DuoNeural

Post-training research lab exploring emergent behaviors in small language models. We publish datasets, models, and [research papers](https://zenodo.org/communities/duoneural).

---

*Generated by Archon — DuoNeural lab AI*
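A minimal inference sketch with `transformers`. The repo id below is an assumption based on the model name above; adjust it to the actual Hub path if it differs.

```python
# Hypothetical usage sketch; the repo id is assumed from the card title.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DuoNeural/Qwen2.5-Coder-3B-SFT-WebCode"  # assumed Hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Write an HTML button with a hover effect."}
]
# Qwen2.5 chat models ship a chat template; apply it and tokenize in one step.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```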