---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- math
- reasoning
- qwen2.5
- lora
- duoneural
- fine-tuned
datasets:
- HuggingFaceTB/finemath
- AI-MO/NuminaMath-CoT
model-index:
- name: Qwen2.5-Math-NeuralMath-7B
  results: []
---
# Qwen2.5-Math-NeuralMath-7B

**DuoNeural | Math Reasoning Fine-Tune | April 2026**
A fine-tuned version of Qwen/Qwen2.5-Math-7B-Instruct with supervised fine-tuning on curated math reasoning data, targeting improved step-by-step problem solving on competition and olympiad-level math.
## What's Different
The base Qwen2.5-Math-7B-Instruct is already a strong math model. This fine-tune focuses on:
- Deeper chain-of-thought: trained on longer, more structured reasoning traces
- Competition math exposure: AMC/AIME/olympiad problems via NuminaMath-CoT
- Format consistency: reliable `\boxed{}` answer formatting across problem types
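Because the model consistently wraps its final answer in `\boxed{}`, the answer can be pulled out of a completion with a small regex helper. This is an illustrative sketch, not a utility shipped in this repo:

```python
import re

def extract_boxed(text: str):
    """Return the contents of the last \\boxed{...} in a generation,
    handling one level of nested braces; None if no box is found."""
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

# Example on a model-style completion:
solution = r"Thus n + 1 divides 2, so n = 1. The answer is \boxed{1}."
print(extract_boxed(solution))  # -> 1
```

Taking the *last* box matters in practice, since chain-of-thought traces sometimes box intermediate results before the final answer.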
## Quickstart
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "DuoNeural/Qwen2.5-Math-NeuralMath-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("DuoNeural/Qwen2.5-Math-NeuralMath-7B")

prompt = """Solve the following math problem step by step.
Problem: Find all positive integers n such that n² + 1 is divisible by n + 1.
Solution:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
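Qwen2.5 instruct models also ship with a chat template, which is usually the more reliable way to prompt them than a raw completion string. A sketch under that assumption, reusing the repo id from the Quickstart and the system prompt from the Ollama Modelfile (the heavy model download is kept behind the `__main__` guard):

```python
SYSTEM_PROMPT = (
    "You are an expert mathematician. Solve problems step by step, "
    "showing all work clearly. Put your final answer in \\boxed{}."
)

def build_messages(problem: str) -> list:
    """Chat-format messages for tokenizer.apply_chat_template."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": problem},
    ]

if __name__ == "__main__":
    from transformers import AutoTokenizer, AutoModelForCausalLM
    import torch

    repo = "DuoNeural/Qwen2.5-Math-NeuralMath-7B"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Render the messages through the model's built-in chat template.
    text = tokenizer.apply_chat_template(
        build_messages("Find all positive integers n such that n^2 + 1 is divisible by n + 1."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```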
## GGUF / Ollama / LM Studio
Pre-quantized GGUFs available in the `gguf/` folder of this repo:

| File | Size | Use case |
|---|---|---|
| `neuromath-7b-q4_k_m.gguf` | 4.7 GB | Recommended: best quality/speed tradeoff |
| `neuromath-7b-q8_0.gguf` | 8.1 GB | High quality, needs 10 GB+ VRAM/RAM |
| `neuromath-7b-f16.gguf` | 15 GB | Full precision, GPU only |
### Ollama

```bash
# Create Modelfile
cat > Modelfile << 'EOF'
FROM ./neuromath-7b-q4_k_m.gguf
SYSTEM "You are an expert mathematician. Solve problems step by step, showing all work clearly. Put your final answer in \\boxed{}."
PARAMETER temperature 0.1
PARAMETER num_ctx 4096
EOF

ollama create neuromath-7b -f Modelfile
ollama run neuromath-7b "What is the sum of all prime numbers less than 100?"
```
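Once the model is created, Ollama also serves it over a local REST API (default `http://localhost:11434`), which is handy for scripting batches of problems. A minimal stdlib-only sketch; the model name matches the `ollama create` command above, and the network call is kept behind the `__main__` guard:

```python
import json
import urllib.request

def generate_payload(prompt: str, model: str = "neuromath-7b") -> dict:
    """Request body for Ollama's /api/generate endpoint.
    stream=False returns one JSON object instead of a token stream."""
    return {"model": model, "prompt": prompt, "stream": False}

if __name__ == "__main__":
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(generate_payload("What is 17 * 23?")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the completion in "response".
        print(json.loads(resp.read())["response"])
```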
### LM Studio

Download `neuromath-7b-q4_k_m.gguf` and load it in LM Studio. Set the system prompt:

"You are an expert mathematician. Solve problems step by step, showing all work. Put your final answer in \boxed{}."
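LM Studio can also expose the loaded model through its OpenAI-compatible local server (default `http://localhost:1234/v1`); this sketch assumes that server is running with the GGUF above loaded, and keeps the network call behind the `__main__` guard:

```python
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are an expert mathematician. Solve problems step by step, "
    "showing all work. Put your final answer in \\boxed{}."
)

def chat_payload(problem: str) -> dict:
    """OpenAI-style request body for /v1/chat/completions."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": problem},
        ],
        "temperature": 0.1,
    }

if __name__ == "__main__":
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(chat_payload("What is the 10th Fibonacci number?")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
        print(body["choices"][0]["message"]["content"])
```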
## Training Details
| Setting | Value |
|---|---|
| Base model | Qwen/Qwen2.5-Math-7B-Instruct |
| Method | QLoRA SFT (4-bit base, LoRA rank 16) |
| Training tokens | ~1.26M (3 epochs over curated math dataset) |
| LoRA alpha | 32 |
| LoRA targets | q, k, v, o, gate, up, down projections |
| Hardware | NVIDIA A100 80GB |
| Framework | Unsloth + HuggingFace Transformers |
| Sequence length | 1024 tokens |
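The LoRA settings in the table above can be transcribed into `peft`-style hyperparameter names. This is a reconstruction for readers who want to reproduce the setup, not the actual training config; values the table does not state (e.g. dropout) are omitted rather than guessed:

```python
# Hyperparameters transcribed from the Training Details table;
# keys follow peft's LoraConfig naming.
LORA_HYPERPARAMS = {
    "r": 16,           # LoRA rank
    "lora_alpha": 32,  # scaling numerator
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
}

def lora_scaling(r: int, alpha: int) -> float:
    """Effective LoRA update scale: the low-rank delta is scaled by alpha / r."""
    return alpha / r

print(lora_scaling(LORA_HYPERPARAMS["r"], LORA_HYPERPARAMS["lora_alpha"]))  # -> 2.0
```

With alpha = 32 and rank 16, the adapter update is scaled by 2.0, a common choice that keeps the update magnitude stable if the rank is later changed.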
## Limitations

- Trained on English math problems; performance on other languages is untested
- Very long multi-step proofs (>1024 tokens) may be truncated during generation
- This is the SFT-only checkpoint; a GRPO reinforcement-learning phase is planned as a follow-up
- Not intended for general conversation; math reasoning only
## DuoNeural

DuoNeural is an open AI research lab: human + AI in collaboration.

| Channel | Link |
|---|---|
| HuggingFace | huggingface.co/DuoNeural |
| GitHub | github.com/DuoNeural |
| X / Twitter | @DuoNeural |
| Email | duoneural@proton.me |
| Newsletter | duoneural.beehiiv.com |
| Support | buymeacoffee.com/duoneural |
| Site | duoneural.com |
## Research Team

- Jesse: vision, hardware, direction
- Archon: AI lab partner, post-training, abliteration, experiments
- Aura: research AI, literature synthesis, novel proposals
Raw updates from the lab: model drops, training results, findings. Subscribe at duoneural.beehiiv.com.