# Hemlock-Codex-7B

## Training Configuration

| Parameter | Value |
|---|---|
| Training Mode | SFT |
| Base Model | hemlang/Hemlock2-Coder-7B |
| Learning Rate | 0.0001 |
| Epochs | 3 |
| Batch Size | 2 |
| Gradient Accumulation | 16 |
| Effective Batch Size | 32 |
| Max Sequence Length | 8192 |
| Optimizer | paged_adamw_8bit |
| LR Scheduler | cosine |
| Warmup Ratio | 0.05 |
| Weight Decay | 0.01 |
| Max Grad Norm | 0.25 |
| Seed | 42 |
| LoRA Rank (r) | 128 |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.05 |
| Target Modules | k_proj, o_proj, q_proj, v_proj, down_proj, gate_proj, up_proj |
| Quantization | 4-bit (NF4) |
| GPU | NVIDIA RTX A6000 |
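
The card lists the hyperparameters but not the training script. As a rough sketch, these settings correspond to a standard QLoRA SFT run; the snippet below maps each table row onto `transformers`/`peft` arguments. `train_ds` is a placeholder for the (unnamed) training dataset, and Merlina's actual pipeline may differ.

```python
# Minimal QLoRA SFT sketch matching the table above (an assumption,
# not Merlina's actual code). Requires transformers, peft, bitsandbytes.
import torch
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "hemlang/Hemlock2-Coder-7B"

# 4-bit NF4 quantization of the frozen base weights
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on all attention and MLP projections
lora = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["k_proj", "o_proj", "q_proj", "v_proj",
                    "down_proj", "gate_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Effective batch size: 2 (per device) x 16 (accumulation) = 32.
# Sequences would be tokenized/packed to the 8192-token limit upstream.
args = TrainingArguments(
    output_dir="hemlock-codex-7b",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    learning_rate=1e-4,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    weight_decay=0.01,
    max_grad_norm=0.25,
    seed=42,
    bf16=True,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```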

Trained with **Merlina** (see Merlina on GitHub).

**Model size:** 8B params · **Tensor type:** BF16 · **Format:** Safetensors
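
Since the checkpoint is published as BF16 Safetensors, it loads with stock `transformers`. A minimal, generic loading sketch (not from the card; the prompt is illustrative):

```python
# Generic inference sketch for the published checkpoint; standard
# transformers usage is an assumption, not the author's own example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hemlang/Hemlock-Codex-7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a function that reverses a linked list."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```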