# Qwen3-Coder-Next 64B REAP - GGUF
Quantized GGUF versions of 0xSero/qwen3-coder-next-64b-REAP.
These were generated with `llama-quantize` (build b8740) using the default settings.
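For reference, a quant like these can be reproduced with `llama-quantize` from llama.cpp, roughly as follows (the source filename here is illustrative):

```shell
# Quantize a BF16 GGUF to Q4_K_M with default settings
./llama-quantize \
  qwen3-coder-next-64b-REAP-BF16.gguf \
  qwen3-coder-next-64b-REAP-Q4_K_M.gguf \
  Q4_K_M
```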
## Quantizations provided
| File | Quantization | Size |
|---|---|---|
| qwen3-coder-next-64b-REAP-Q4_K_M.gguf | Q4_K_M | 39.1 GB |
| qwen3-coder-next-64b-REAP-Q5_K_M.gguf | Q5_K_M | 45.8 GB |
| qwen3-coder-next-64b-REAP-Q6_K.gguf | Q6_K | 52.9 GB |
| qwen3-coder-next-64b-REAP-Q8_0.gguf | Q8_0 | 68.4 GB |
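As a rough sanity check on the sizes above, the effective bits per weight can be estimated from file size; the ~64B total parameter count is taken from the model name and is an assumption:

```python
# Estimate effective bits per weight from GGUF file size.
# Sizes are from the table above; the 64e9 parameter count is assumed
# from the model name, so these are approximations.
sizes_gb = {"Q4_K_M": 39.1, "Q5_K_M": 45.8, "Q6_K": 52.9, "Q8_0": 68.4}
params = 64e9

bits_per_weight = {name: gb * 8e9 / params for name, gb in sizes_gb.items()}
for name, bpw in bits_per_weight.items():
    print(f"{name}: {bpw:.2f} bits/weight")
```

The results land close to the nominal bit widths of each quantization type (e.g. roughly 4.9 for Q4_K_M, 8.6 for Q8_0), which suggests the file sizes are consistent.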
## Perplexity test
I tested perplexity using `llama-perplexity` and Salesforce's wikitext-2-raw-v1 dataset.
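A run along these lines would look like the following (model and dataset paths are illustrative):

```shell
# Measure perplexity on the wikitext-2 raw test split at a 512-token context
./llama-perplexity \
  -m qwen3-coder-next-64b-REAP-Q4_K_M.gguf \
  -f wikitext-2-raw/wiki.test.raw \
  -c 512
```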
| File | Quantization | Ctx | PPL |
|---|---|---|---|
| qwen3-coder-next-64b-REAP-Q4_K_M.gguf | Q4_K_M | 512 | 12.6123 +/- 0.10518 |
| qwen3-coder-next-64b-REAP-Q5_K_M.gguf | Q5_K_M | 512 | 12.5573 +/- 0.10461 |
| qwen3-coder-next-64b-REAP-Q6_K.gguf | Q6_K | 512 | 12.4087 +/- 0.10285 |
| qwen3-coder-next-64b-REAP-Q8_0.gguf | Q8_0 | 512 | 12.4389 +/- 0.10323 |
| qwen3-coder-next-64b-REAP-BF16.gguf | BF16 | 512 | 12.4162 +/- 0.10302 |
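Note that several of the gaps above (e.g. Q6_K vs. Q8_0 vs. BF16) are smaller than the reported uncertainties. A quick sketch of that comparison, treating the +/- values as independent standard errors (an assumption on my part):

```python
import math

# PPL and +/- values copied from the table above
ppl = {
    "Q6_K": (12.4087, 0.10285),
    "Q8_0": (12.4389, 0.10323),
    "BF16": (12.4162, 0.10302),
}

def within_error(a, b):
    """True if the PPL difference between quants a and b is smaller
    than the combined standard error (errors assumed independent)."""
    (pa, ea), (pb, eb) = ppl[a], ppl[b]
    return abs(pa - pb) < math.hypot(ea, eb)

print(within_error("Q6_K", "Q8_0"))  # the 0.03 gap is well inside ~0.15
print(within_error("Q6_K", "BF16"))
```

By this measure, Q6_K, Q8_0, and BF16 are statistically indistinguishable on this test, while Q4_K_M shows a small but visible quality drop.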
## Model tree for CodeFault/Qwen3-Coder-Next-64B-REAP-GGUF

- Base model: Qwen/Qwen3-Coder-Next
- Finetuned: 0xSero/qwen3-coder-next-64b-REAP