Quantized versions of coder3101/gpt-oss-20b-heretic. Heretic is a new abliteration method that doesn't seem to damage the base model as much as previous methods did.
WARNING: Due to the nature of GPT-OSS and its MXFP4 weight format, alterations (tuning, abliteration, quantization) have to be done very carefully. Whoever made the abliteration pass with Heretic erased the MXFP4 format (maybe that's a requirement of Heretic, maybe not, I have no idea). This means that my quantization (and any other quantization of this model) is an approximation of an approximation of GPT-OSS's original weights.
As a result, do not expect quality comparable to the official non-abliterated model. This is a toy, not a production-ready model.
The repo includes the following quantized versions (Q8_0 or MXFP4 recommended; the other variants are not stable):
Okay Models
- Q8_0: This should be decent enough. Allows for 8-12K context on 24GB VRAM (see the loading sketch after this list).
- MXFP4: This does NOT restore the original MXFP4 weights and will behave worse than Q8_0, but it's lightweight and seems more stable than IQ4_NL (didn't test much, ngl).
Experimental Models (they do weird shit)
- Q5_1: Nominally Q5_K_M, but in practice it comes out as Q5_1 with Q8_0 attention layers (see the K-quant note below). Unstable, not recommended.
- IQ4_NL: Should work, but quality is worse than IQ4_NL on other models while using a bit more VRAM. Unstable, not recommended.
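If you want to sanity-check one of these files, here's a minimal sketch using the llama-cpp-python bindings (any llama.cpp frontend works the same way). The GGUF filename is a placeholder and the context size just mirrors the Q8_0 note above; adjust both to whatever you downloaded and to your hardware.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model filename below is hypothetical -- use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-20b-heretic-Q8_0.gguf",  # placeholder filename
    n_ctx=8192,       # ~8K context should fit in 24GB VRAM with Q8_0, per the note above
    n_gpu_layers=-1,  # offload all layers to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```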
Why No Qx_K_x Models?!
Simply put, K-type quantization schemes are not compatible with GPT-OSS's architecture, at least not without serious voodoo magic. Roughly 90% of the tensors can't use K-type formats, so llama.cpp's quantization tool automatically falls back to something that works, usually Q8_0 (or Q5_1).
That's why, when you browse quantized versions of GPT-OSS, you'll find tons of repos advertising different quants that all end up with roughly the same file size.
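You can verify this yourself: here's a rough sketch using the `gguf` Python package (the one that ships with llama.cpp) to tally the per-tensor quant types in any GGUF file. The filename is just an example.

```python
# Sketch: count which quant type each tensor actually got in a GGUF file,
# using the `gguf` Python package (pip install gguf).
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("gpt-oss-20b-heretic-Q5_K_M.gguf")  # placeholder filename

counts = Counter(t.tensor_type.name for t in reader.tensors)
for quant_type, n in counts.most_common():
    print(f"{quant_type:10s} {n} tensors")
```

On a nominally Q5_K_M file of this model you'd expect the tally to be dominated by Q5_1 and Q8_0 rather than K-types, which is exactly why all those "different" quants converge to the same size.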