# Stack 2.9 Training Pipeline
Complete training infrastructure for fine-tuning Stack 2.9 (Qwen2.5-Coder-32B) into a specialized, high-capability coding model.
## Overview
This pipeline provides:
1. **Data Preparation** (`prepare_data.py`) - Load, clean, format, deduplicate, and filter training data
2. **LoRA Training** (`train_lora.py`) - Fine-tune with LoRA adapters
3. **Model Merging** (`merge_adapter.py`) - Merge LoRA weights back to base model
4. **AWQ Quantization** (optional) - Quantize for efficient inference
## Requirements
- Python 3.10+
- CUDA-compatible GPU (recommended)
- 32GB+ VRAM for base model (48GB+ recommended for training)
- 128GB+ system RAM
## Quick Start
```bash
# Install dependencies
pip install -r requirements.txt
# Run complete pipeline
./run_training.sh
```
## Individual Steps
### 1. Prepare Data
```bash
python prepare_data.py --config train_config.yaml
```
Features:
- Loads JSONL training data
- Handles multiple formats (messages, instruction/response, prompt/completion)
- Deduplication via content hash
- Quality filtering (min/max length, response presence)
- Tokenization with chat template
- 90/10 train/eval split
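The deduplication step above can be sketched as follows. This is a minimal illustration, not the actual `prepare_data.py` implementation; the function name and record fields are assumptions:

```python
import hashlib
import json

def dedupe(records):
    """Drop records whose canonicalized JSON content hashes to a value already seen."""
    seen = set()
    unique = []
    for rec in records:
        # sort_keys makes the hash insensitive to key ordering in the source JSONL
        digest = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(rec)
    return unique

records = [
    {"instruction": "Sort a list", "response": "sorted(xs)"},
    {"instruction": "Sort a list", "response": "sorted(xs)"},  # exact duplicate
]
print(len(dedupe(records)))  # 1
```

Hashing the serialized record catches only exact duplicates; near-duplicate detection (e.g. MinHash) would be a separate step.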
### 2. Train LoRA
```bash
python train_lora.py --config train_config.yaml
```
Features:
- 4-bit quantization (bitsandbytes)
- LoRA: r=64, alpha=128
- Target modules: [q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj]
- Gradient checkpointing
- Mixed precision (FP16/BF16)
- wandb/tensorboard logging
- Checkpointing and resume
### 3. Merge Adapter
```bash
python merge_adapter.py --config train_config.yaml
```
Features:
- Merges LoRA into base model
- Exports to HuggingFace format
- Optional AWQ quantization
## Configuration
All hyperparameters are in `train_config.yaml`:
```yaml
# Model
model:
  name: "Qwen/Qwen2.5-Coder-32B"
  torch_dtype: "bfloat16"

# Data
data:
  input_path: "path/to/examples.jsonl"
  max_length: 131072

# LoRA
lora:
  r: 64
  alpha: 128
  target_modules: [...]

# Training
training:
  num_epochs: 3
  batch_size: 1
  gradient_accumulation: 16
  learning_rate: 1.0e-4
```
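Reading these values in a script is straightforward with PyYAML (a sketch using an inline excerpt of the config; key names follow the example above):

```python
import yaml

config_text = """
lora:
  r: 64
  alpha: 128
training:
  num_epochs: 3
  learning_rate: 1.0e-4
"""

config = yaml.safe_load(config_text)
print(config["lora"]["r"])                   # 64
print(config["training"]["learning_rate"])   # 0.0001
```

Note the `1.0e-4` spelling: YAML 1.1 requires a signed exponent and a decimal point for scientific notation, so `1e-4` would be parsed as a string by PyYAML.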
## File Structure
```
stack-2.9-training/
├── train_config.yaml     # All hyperparameters
├── prepare_data.py       # Data preparation
├── train_lora.py         # LoRA training
├── merge_adapter.py      # Model merging
├── run_training.sh       # One-command pipeline
├── requirements.txt      # Python dependencies
├── data/                 # Processed datasets
│   ├── train/
│   └── eval/
└── output/               # Trained models
    ├── stack-2.9-lora/   # LoRA adapter
    ├── stack-2.9-merged/ # Merged model
    └── stack-2.9-awq/    # Quantized model
```
## Hardware Requirements
| Config | GPU VRAM | System RAM | Training Time |
|--------|----------|------------|---------------|
| Minimum | 32GB | 64GB | 12-24h |
| Recommended | 48GB+ | 128GB+ | 6-12h |
| Optimal | 80GB (A100) | 256GB+ | 4-8h |
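A back-of-envelope calculation shows why 4-bit quantization makes the 32GB minimum workable. This counts model weights only and ignores activations, optimizer state, and the KV cache, which add substantially on top:

```python
params = 32e9  # 32B parameters

# bf16 = 2 bytes/param; 4-bit = 0.5 bytes/param
print(f"bf16 weights:  {params * 2 / 2**30:.0f} GiB")   # ~60 GiB
print(f"4-bit weights: {params * 0.5 / 2**30:.0f} GiB")  # ~15 GiB
```

So the full-precision weights alone exceed a 48GB card, while the 4-bit weights leave roughly half of a 32GB card free for LoRA gradients and activations.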
## Usage After Training
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the AWQ-quantized model (requires the autoawq package;
# do not also pass load_in_4bit -- that is bitsandbytes quantization
# and conflicts with an AWQ checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    "output/stack-2.9-awq",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-32B")
# Generate
prompt = "Write a Python function to calculate factorial"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Troubleshooting
### Memory Issues
- Reduce `batch_size` in config
- Enable gradient checkpointing
- Use 4-bit quantization
### CUDA Errors
- Verify CUDA drivers: `nvidia-smi`
- Check PyTorch CUDA: `python -c "import torch; print(torch.cuda.is_available())"`
### Data Errors
- Verify JSONL format
- Check required columns: messages OR instruction/response
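For reference, the two accepted record shapes look like the following (illustrative examples; each record is one JSON object per line in the `.jsonl` file):

```python
import json

# Chat-style "messages" format
chat_record = {"messages": [
    {"role": "user", "content": "Write a factorial function"},
    {"role": "assistant", "content": "def fact(n): ..."},
]}

# Flat instruction/response format
pair_record = {
    "instruction": "Write a factorial function",
    "response": "def fact(n): ...",
}

# Serialize as one line of JSONL and read it back
line = json.dumps(chat_record)
print(json.loads(line)["messages"][0]["role"])  # user
```

A quick way to validate a file is to `json.loads` each line and check that it contains either a `messages` key or both `instruction` and `response`.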
## License
Provided as-is for educational and research purposes.