How to use from Pi
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf QuantFactory/Qwen2.5-7B-Instruct-MathCoder-GGUF
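The repo name accepts an optional :<quant> suffix (for example :Q4_K_M, used here only as an illustration) to select a specific quantization; without it, llama-server picks a default GGUF file. Once the server is up, a quick sanity check against its OpenAI-compatible endpoint (assuming the default port 8080):
# List the models the server exposes:
curl http://localhost:8080/v1/models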
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "Qwen2.5-7B-Instruct-MathCoder-GGUF"
        }
      ]
    }
  }
}
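Before launching Pi, you can verify the endpoint end to end with a standard OpenAI-style chat completion (the model id matches the one configured above; llama-server serves whichever model it loaded, but keeping the ids consistent avoids confusion):
# Send a one-off chat completion:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen2.5-7B-Instruct-MathCoder-GGUF", "messages": [{"role": "user", "content": "What is 7*8?"}]}'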
Run Pi
# Start Pi in your project directory:
pi

QuantFactory/Qwen2.5-7B-Instruct-MathCoder-GGUF

This is a quantized (GGUF) version of DeepMount00/Qwen2.5-7B-Instruct-MathCoder, created using llama.cpp.

Original Model Card

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method (Yadav et al., 2023), with Qwen/Qwen2.5-7B-Instruct as the base. TIES trims each fine-tuned model's parameter deltas to the largest-magnitude fraction, elects a per-parameter sign across models, and averages only the values that agree with that sign.
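To make the method concrete, here is a minimal numpy sketch of the TIES idea on flat parameter arrays. It illustrates the algorithm only, not mergekit's actual implementation; the density and weight values mirror the YAML fields below, and the toy vectors stand in for real model weights.

import numpy as np

def ties_merge(base, finetuned, density=0.5, weight=0.5):
    """Merge flat parameter arrays with the TIES procedure:
    trim, elect sign, disjoint merge."""
    deltas = []
    for ft in finetuned:
        d = (ft - base) * weight
        # Trim: keep only the top-`density` fraction by magnitude.
        k = max(int(round(d.size * density)), 1)
        thresh = np.sort(np.abs(d))[-k]
        deltas.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(deltas)
    # Elect sign: per-parameter sign of the summed trimmed deltas.
    elected = np.sign(stacked.sum(axis=0))
    # Disjoint merge: average only values agreeing with the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0.0)
    merged = np.where(agree, stacked, 0.0).sum(axis=0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return base + merged / counts

# Toy usage with random vectors standing in for model weights:
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
math_ft = base + rng.normal(scale=0.1, size=1000)
coder_ft = base + rng.normal(scale=0.1, size=1000)
merged = ties_merge(base, [math_ft, coder_ft])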

Models Merged

The following models were included in the merge:

Qwen/Qwen2.5-Math-7B-Instruct
Qwen/Qwen2.5-Coder-7B-Instruct

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Qwen/Qwen2.5-7B-Instruct
    # no parameters necessary for base model
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
  - model: Qwen/Qwen2.5-Coder-7B-Instruct
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: Qwen/Qwen2.5-7B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: float16
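To reproduce the merge, this YAML can be fed to mergekit's command-line entry point (a sketch; the config file name and output directory are chosen here for illustration):
# Install mergekit and run the merge:
pip install mergekit
mergekit-yaml config.yaml ./Qwen2.5-7B-Instruct-MathCoder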
Format: GGUF
Model size: 8B params
Architecture: qwen2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit