## Method & Process
This model was processed using the Obliteratus methodology. The fine-tuning/transformation was executed via the official notebook provided by [pliny-the-prompter](https://huggingface.co/spaces/pliny-the-prompter/obliteratus).
## Quantization & Inference Engine
- Framework: llama.cpp
- Format: GGUF (v3)
- Original Authors: Georgi Gerganov and the GGML contributors.
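Since the model ships in GGUF v3, a quick way to sanity-check a downloaded file is to read its fixed header. The sketch below is an assumption based on the public GGUF specification (little-endian magic `GGUF`, then a `uint32` version, a `uint64` tensor count, and a `uint64` metadata key/value count); it is a minimal check, not part of the Obliteratus tooling.

```python
import struct

def read_gguf_header(data: bytes):
    """Parse the fixed-size GGUF header from the first 24 bytes of a file.

    Layout (per the GGUF spec, little-endian):
      bytes 0-3   magic  b"GGUF"
      bytes 4-7   uint32 format version (3 for this model)
      bytes 8-15  uint64 tensor count
      bytes 16-23 uint64 metadata key/value count
    """
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    tensor_count, kv_count = struct.unpack_from("<QQ", data, 8)
    return version, tensor_count, kv_count
```

To use it on a real file, read the first 24 bytes (`read_gguf_header(open(path, "rb").read(24))`) and confirm the version is 3 before loading the model with llama.cpp.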
## References & Credits
- Original Methodology: Obliteratus by pliny-the-prompter.
- Tooling: Jupyter Notebook implementation from the Obliteratus Space.