---
tags:
- gguf
- llama.cpp
- vision-language-model
base_model:
- google/gemma-4-E2B-it
pipeline_tag: image-text-to-text
---
# iris: GGUF
|
|
This model was fine-tuned on the **Opus 4.6 dataset** (~100,000 high-quality samples) and converted to GGUF format.
|
|
**Credit**: Fine-tuned efficiently using [Unsloth](https://github.com/unslothai/unsloth).
|
|
**Example usage**:
- Text-only inference: `llama-cli -hf Shadow0482/iris --jinja`
- Multimodal inference: `llama-mtmd-cli -hf Shadow0482/iris --jinja`
|
|
## Available Model files:
- `gemma-4-e2b-it.Q4_K_M.gguf` (main text model, Q4_K_M quantization)
- `gemma-4-e2b-it.BF16-mmproj.gguf` (multimodal projector, BF16)
|
|
## ⚠️ Ollama Note for Vision Models
**Important:** Ollama currently does not support separate mmproj files for vision models.
|
|
To create an Ollama model from this vision model:
1. Place a `Modelfile` in the same directory as the fine-tuned bf16 merged model.
2. Run: `ollama create model_name -f ./Modelfile`
   (replace `model_name` with your desired name).
|
|
This will create a unified bf16 model that Ollama can use.
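
A minimal `Modelfile` sketch for step 1 above. It assumes the file sits inside the merged bf16 model directory; add `TEMPLATE`/`PARAMETER` entries as needed for your setup:

```
# Hypothetical minimal Modelfile: "." points at the merged bf16 model
# directory this file sits in; Ollama imports the weights from there.
FROM .
```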
|
|
## Training Details
|
|
The model was fine-tuned on the **Opus 4.6 dataset** using approximately 100,000 samples. This dataset consists of high-quality instruction-response pairs, including advanced Chain-of-Thought reasoning traces generated by Claude Opus 4.6, for strong reasoning and instruction-following capabilities.
|
|
### Detailed Training Steps:
|
|
1. **Dataset Preparation**:
   - Gathered the Opus 4.6 dataset containing ~100,000 high-quality samples.
   - Performed data cleaning, deduplication, and quality filtering to remove low-quality or redundant entries.
   - Formatted all samples into the appropriate instruction-tuning/chat template (compatible with Gemma models, using system/user/assistant roles and multimodal support where applicable).
   - Split the dataset into training and validation sets (95/5 ratio); a formatting sketch follows below.
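
A minimal formatting sketch for this step, assuming the raw samples live in a local `opus-4.6.jsonl` file with `instruction`/`response` fields (the filename and field names are placeholders; the actual dataset layout is not published):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# "opus-4.6.jsonl" and the field names below are placeholders.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-4-E2B-it")
raw = load_dataset("json", data_files="opus-4.6.jsonl", split="train")

def to_chat(example):
    # Map each instruction-response pair onto the model's chat template.
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = raw.map(to_chat).train_test_split(test_size=0.05)  # 95/5 split
```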
|
|
2. **Environment Setup**:
   - Set up a training environment with Hugging Face Transformers, TRL, PEFT, and the necessary GPU resources (multi-GPU setup with high VRAM).
   - Loaded the base model in 4-bit quantization for memory efficiency during training (see the loading sketch below).
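
A minimal sketch of the 4-bit load using plain Transformers + bitsandbytes (the actual run used Unsloth per the credit above, which wraps the same idea):

```python
import torch
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

# QLoRA-style setup: frozen 4-bit NF4 base weights with bf16 compute,
# which keeps VRAM usage low during fine-tuning.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForImageTextToText.from_pretrained(
    "google/gemma-4-E2B-it",  # base model named in this card
    quantization_config=bnb,
    device_map="auto",
)
```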
|
|
3. **Model Configuration**:
   - Applied LoRA (Low-Rank Adaptation) adapters for parameter-efficient fine-tuning of the base Gemma-4-E2B-it model (see the sketch below).
   - Configured the training pipeline for supervised fine-tuning (SFT), including proper handling of the vision-language components (text + image projector).
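
A representative LoRA configuration via PEFT; the exact rank/alpha/target modules used for iris are not published, so the values below are common defaults, not the actual settings:

```python
from peft import LoraConfig, get_peft_model

# Representative QLoRA-style settings; not the values actually used.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```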
|
|
4. **Training**:
   - Ran supervised fine-tuning on the prepared training split (see the sketch below).
   - Monitored training loss and validation metrics, adjusting hyperparameters as needed (learning rate, batch size, number of epochs, warmup steps, LoRA rank/alpha, etc.).
   - Completed the full training run to produce the fine-tuned "iris" model while preserving the uncensored behavior of the base.
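
A minimal TRL SFT sketch tying the pieces above together; all hyperparameters shown are illustrative, since the card does not state the values used:

```python
from trl import SFTConfig, SFTTrainer

# Illustrative hyperparameters only.
args = SFTConfig(
    output_dir="iris-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    warmup_ratio=0.03,
    logging_steps=10,
    eval_strategy="steps",
    eval_steps=500,
)
trainer = SFTTrainer(
    model=model,                  # PEFT-wrapped 4-bit base from above
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```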
|
|
5. **Post-Training Processing**:
   - Merged the LoRA adapters back into the base model weights (see the merge sketch below).
   - Saved the resulting fine-tuned model in Hugging Face format.
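
A sketch of the merge-and-save step. Merging directly into 4-bit weights is lossy, so the base is reloaded in bf16 first, which matches the "bf16 merged model" referenced in the Ollama note above; the `iris-sft`/`iris-merged` paths are placeholders:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForImageTextToText

# Reload the base in bf16, attach the trained adapters, fold the LoRA
# deltas into the weights, and save a standalone HF checkpoint.
base = AutoModelForImageTextToText.from_pretrained(
    "google/gemma-4-E2B-it", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "iris-sft").merge_and_unload()
merged.save_pretrained("iris-merged", safe_serialization=True)
tokenizer.save_pretrained("iris-merged")
```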
|
|
6. **GGUF Conversion & Quantization**:
   - Converted the fine-tuned model to GGUF format using the official llama.cpp tools (example commands below).
   - Generated the main model file in Q4_K_M quantization.
   - Converted the multimodal projector (mmproj) to `BF16-mmproj.gguf` format.
   - Verified model integrity and basic functionality post-conversion.
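
Example conversion commands, assuming a llama.cpp checkout and the merged checkpoint in `iris-merged/`; script names and flags vary across llama.cpp versions, so treat these as a sketch:

```bash
# Text model: HF checkpoint -> BF16 GGUF, then quantize to Q4_K_M
python convert_hf_to_gguf.py iris-merged --outtype bf16 \
  --outfile gemma-4-e2b-it.BF16.gguf
llama-quantize gemma-4-e2b-it.BF16.gguf gemma-4-e2b-it.Q4_K_M.gguf Q4_K_M

# Vision projector: export only the mmproj tensors
python convert_hf_to_gguf.py iris-merged --mmproj \
  --outfile gemma-4-e2b-it.BF16-mmproj.gguf

# Quick smoke test of the converted pair
llama-mtmd-cli -m gemma-4-e2b-it.Q4_K_M.gguf \
  --mmproj gemma-4-e2b-it.BF16-mmproj.gguf --jinja
```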
|
|
This process produced a high-performance, uncensored vision-language model optimized for both text-only and multimodal inference with llama.cpp.