mythos: GGUF

This model was fine-tuned on the Opus 4.7 dataset (~40,000 high-quality samples) and converted to GGUF format.

Credit: fine-tuned efficiently using Unsloth.

Example usage:

  • For text-only LLMs: llama-cli -hf Shadow0482/mythos --jinja
  • For multimodal models: llama-mtmd-cli -hf Shadow0482/mythos --jinja

Available model files:

  • gemma-4-E2B-it-Uncensored-MAX.Q5_K_M.gguf
  • gemma-4-E2B-it-Uncensored-MAX.BF16-mmproj.gguf

Training Details

The model was fine-tuned on the Opus 4.7 dataset, approximately 40,000 high-quality instruction-response pairs, including advanced chain-of-thought reasoning traces largely generated by Claude Opus 4.7 to provide strong reasoning and instruction-following supervision.

Detailed Training Steps:

  1. Dataset Preparation:

    • Acquired/gathered the Opus 4.7 dataset containing ~40,000 high-quality samples.
    • Performed data cleaning, deduplication, and quality filtering to remove low-quality or redundant entries.
    • Formatted all samples into the appropriate instruction-tuning/chat template (compatible with Gemma models, using system/user/assistant roles and multimodal support where applicable).
    • Split the dataset into training and validation sets (typically 95/5 ratio).
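
The preparation steps above can be sketched in plain Python. This is a minimal illustration, not the actual pipeline: the sample field names (`instruction`, `response`), the hash-based duplicate filter, and the random seed are all assumptions, and the turn markers follow the Gemma chat convention.

```python
import hashlib
import random

def prepare(samples, val_ratio=0.05, seed=3407):
    """Deduplicate, format with Gemma-style chat turns, and split train/val."""
    seen, formatted = set(), []
    for s in samples:
        # Exact-duplicate filter via a content hash
        key = hashlib.sha256((s["instruction"] + s["response"]).encode()).hexdigest()
        if key in seen:
            continue
        seen.add(key)
        # Gemma-style turns (Gemma folds any system prompt into the user turn)
        text = (f"<start_of_turn>user\n{s['instruction']}<end_of_turn>\n"
                f"<start_of_turn>model\n{s['response']}<end_of_turn>\n")
        formatted.append({"text": text})
    random.Random(seed).shuffle(formatted)
    n_val = max(1, int(len(formatted) * val_ratio))  # ~95/5 split
    return formatted[n_val:], formatted[:n_val]      # train, validation

train, val = prepare([
    {"instruction": "Say hi.", "response": "Hi!"},
    {"instruction": "Say hi.", "response": "Hi!"},   # duplicate, dropped
    {"instruction": "Name a color.", "response": "Blue."},
])
```

A real run would stream the ~40,000 samples from disk instead of a literal list, but the dedup/format/split logic is the same shape.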
  2. Environment Setup:

    • Set up a training environment with Hugging Face Transformers, TRL, PEFT, and the necessary GPU resources (multi-GPU setup with high VRAM).
    • Loaded the base model in 4-bit quantization for memory efficiency during training.
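
A 4-bit load of this kind is commonly done with Transformers plus bitsandbytes (QLoRA-style NF4). The fragment below is a hedged sketch, not the card author's exact code: the base-model id is a placeholder, and a vision-language checkpoint may require a different Auto class than `AutoModelForCausalLM`.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

BASE_MODEL = "<base-gemma-e2b-it-model-id>"  # placeholder for the actual base checkpoint

# NF4 double-quantized 4-bit weights; compute runs in bf16 while the
# quantized base stays frozen (the LoRA adapters train on top of it)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",   # spread across the multi-GPU setup
)
```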
  3. Model Configuration:

    • Applied LoRA (Low-Rank Adaptation) adapters for parameter-efficient fine-tuning on the base Gemma-4-E2B-it model.
    • Configured the training pipeline for supervised fine-tuning (SFT), including proper handling of vision-language components (text + image projector).
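
With PEFT, the adapter setup described above typically looks like the following. The rank, alpha, and target-module list are illustrative defaults, not values confirmed by this card.

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,              # rank of the low-rank update matrices
    lora_alpha=16,     # scaling applied to the update
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    # Attention and MLP projections are the usual targets for Gemma-family models
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(model, lora_config)  # wraps the 4-bit base loaded earlier
model.print_trainable_parameters()          # sanity-check the trainable fraction
```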
  4. Training:

    • Ran supervised fine-tuning on the 40,000 prepared samples.
    • Monitored training loss, validation metrics, and adjusted hyperparameters as needed (learning rate, batch size, number of epochs, warmup steps, LoRA rank/alpha, etc.).
    • Completed the full training run to produce the fine-tuned "mythos" model while preserving the uncensored behavior of the base.
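
The card does not state the exact learning-rate schedule; linear warmup followed by cosine decay is a common SFT default and is sketched below. All hyperparameter values here are illustrative, not the ones actually used.

```python
import math

def lr_at_step(step, max_lr=2e-4, warmup_steps=50, total_steps=1000):
    """Linear warmup to max_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * max_lr * (1 + math.cos(math.pi * progress))
```

Plotting `lr_at_step` over the run makes it easy to see whether a loss spike lines up with the end of warmup, which is one of the adjustments step 4 alludes to.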
  5. Post-Training Processing:

    • Merged the LoRA adapters back into the base model weights.
    • Saved the resulting fine-tuned model in Hugging Face format.
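
Merging folds each adapter into its frozen base weight as W' = W + (alpha/r)·B·A; in PEFT this is what `merge_and_unload()` does. A toy NumPy illustration of the arithmetic (dimensions and values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16          # hidden size, LoRA rank, LoRA alpha

W = rng.normal(size=(d, d))     # frozen base weight
A = rng.normal(size=(r, d))     # LoRA down-projection (trained)
B = np.zeros((d, r))            # LoRA up-projection (zero-initialized)
B[0, 0] = 1.0                   # pretend training moved one entry

# Fold the adapter into the base weight: W' = W + (alpha/r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# The merged weight reproduces the base-plus-adapter forward pass exactly
x = rng.normal(size=d)
adapter_out = W @ x + (alpha / r) * (B @ (A @ x))
```

After the merge the adapter matrices can be discarded, which is why the saved Hugging Face checkpoint is a plain dense model ready for GGUF conversion.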
  6. GGUF Conversion & Quantization:

    • Converted the fine-tuned model to GGUF format using the official llama.cpp tools.
    • Generated the main model file in Q5_K_M quantization (balanced quality/size).
    • Converted the multimodal projector (mmproj) to BF16-mmproj.gguf format.
    • Verified model integrity and basic functionality post-conversion.
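
With stock llama.cpp tooling, the conversion step might look like the commands below. File paths are placeholders, and the `--mmproj` flag assumes a recent llama.cpp checkout with multimodal export support.

```shell
# Convert the merged HF checkpoint to GGUF at F16 first
python convert_hf_to_gguf.py ./mythos-merged \
    --outfile mythos-F16.gguf --outtype f16

# Quantize to Q5_K_M (the balanced quality/size point shipped here)
./llama-quantize mythos-F16.gguf mythos-Q5_K_M.gguf Q5_K_M

# Export the multimodal projector separately (kept at BF16)
python convert_hf_to_gguf.py ./mythos-merged \
    --mmproj --outfile mythos-BF16-mmproj.gguf

# Smoke-test the converted files
llama-mtmd-cli -m mythos-Q5_K_M.gguf --mmproj mythos-BF16-mmproj.gguf -p "Hello"
```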

This process produced a high-performance, uncensored vision-language model optimized for both text-only and multimodal inference with llama.cpp.

Downloads last month: 555
Format: GGUF
Model size: 5B params
Architecture: gemma4