Instructions for using Shadow0482/iris with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use Shadow0482/iris with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Load the quantized main model weights. (The BF16-mmproj file in this repo
# is the separate vision projector, not a standalone model.)
llm = Llama.from_pretrained(
    repo_id="Shadow0482/iris",
    filename="gemma-4-e2b-it.Q4_K_M.gguf",
)
```
```python
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Shadow0482/iris with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Shadow0482/iris:BF16

# Run inference directly in the terminal:
llama-cli -hf Shadow0482/iris:BF16
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Shadow0482/iris:BF16

# Run inference directly in the terminal:
llama-cli -hf Shadow0482/iris:BF16
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Shadow0482/iris:BF16

# Run inference directly in the terminal:
./llama-cli -hf Shadow0482/iris:BF16
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Shadow0482/iris:BF16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Shadow0482/iris:BF16
```
Use Docker
```sh
docker model run hf.co/Shadow0482/iris:BF16
```
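Once `llama-server` is running, it exposes an OpenAI-compatible API (on port 8080 by default). A minimal Python sketch using the `openai` client; the model id is an assumption (a single-model server typically accepts any value here), adjust to your setup:

```python
# pip install openai
from openai import OpenAI

# Local server; any string works as the API key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="Shadow0482/iris:BF16",  # assumed id; llama-server often ignores it
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```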
- LM Studio
- Jan
- vLLM
How to use Shadow0482/iris with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Shadow0482/iris"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Shadow0482/iris",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```
Use Docker
```sh
docker model run hf.co/Shadow0482/iris:BF16
```
- Ollama
How to use Shadow0482/iris with Ollama:
```sh
ollama run hf.co/Shadow0482/iris:BF16
```
- Unsloth Studio
How to use Shadow0482/iris with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Shadow0482/iris to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Shadow0482/iris to start chatting.
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required:
# open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for Shadow0482/iris to start chatting.
```
- Pi
How to use Shadow0482/iris with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Shadow0482/iris:BF16
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [{ "id": "Shadow0482/iris:BF16" }]
    }
  }
}
```
Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use Shadow0482/iris with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Shadow0482/iris:BF16
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Shadow0482/iris:BF16
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use Shadow0482/iris with Docker Model Runner:
```sh
docker model run hf.co/Shadow0482/iris:BF16
```
- Lemonade
How to use Shadow0482/iris with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Shadow0482/iris:BF16
```
Run and chat with the model
```sh
lemonade run user.iris-BF16
```
List all available models
```sh
lemonade list
```
iris: GGUF
This model was finetuned on the Opus 4.6 dataset (using ~100,000 high-quality samples) and converted to GGUF format.
Credit: Finetuned efficiently using Unsloth.
Example usage:
- For text-only LLMs:
```sh
llama-cli -hf Shadow0482/iris --jinja
```
- For multimodal models:
```sh
llama-mtmd-cli -hf Shadow0482/iris --jinja
```
Available Model files:
- gemma-4-e2b-it.Q4_K_M.gguf (quantized main model)
- gemma-4-e2b-it.BF16-mmproj.gguf (BF16 multimodal projector for vision input)
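To fetch the files directly, a minimal sketch with `huggingface_hub` (filenames as listed above):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads each file to the local HF cache and returns its path.
model_path = hf_hub_download("Shadow0482/iris", "gemma-4-e2b-it.Q4_K_M.gguf")
mmproj_path = hf_hub_download("Shadow0482/iris", "gemma-4-e2b-it.BF16-mmproj.gguf")
print(model_path, mmproj_path)
```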
⚠️ Ollama Note for Vision Models
Important: Ollama currently does not support separate mmproj files for vision models.
To create an Ollama model from this vision model:
- Place the `Modelfile` in the same directory as the finetuned bf16 merged model
- Run: `ollama create model_name -f ./Modelfile` (replace `model_name` with your desired name)
This will create a unified bf16 model that Ollama can use.
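For reference, a minimal `Modelfile` can be a single `FROM` line; the filename below is hypothetical and should point at your merged bf16 GGUF:

```
# Hypothetical filename; point this at your merged bf16 GGUF.
FROM ./gemma-4-e2b-it.BF16.gguf
```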
Training Details
The model was fine-tuned on the Opus 4.6 dataset using approximately 100,000 samples. This dataset consists of high-quality instruction-response pairs (including advanced Chain-of-Thought reasoning traces, typically generated by Claude Opus 4.7 for superior reasoning and instruction-following capabilities).
Detailed Training Steps:
Dataset Preparation:
- Gathered the Opus 4.6 dataset containing ~100,000 high-quality samples.
- Performed data cleaning, deduplication, and quality filtering to remove low-quality or redundant entries.
- Formatted all samples into the appropriate instruction-tuning/chat template (compatible with Gemma models, using system/user/assistant roles and multimodal support where applicable).
- Split the dataset into training and validation sets (typically a 95/5 ratio), as sketched below.
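A minimal sketch of the formatting and split steps. The base-model id, data file, and `prompt`/`response` column names are assumptions for illustration; the actual dataset is not published in this repo:

```python
# pip install datasets transformers
from datasets import load_dataset
from transformers import AutoTokenizer

BASE_MODEL = "google/gemma-3n-E2B-it"  # assumption: substitute the actual base model id
DATA_FILE = "opus_4_6.jsonl"           # hypothetical local file

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
dataset = load_dataset("json", data_files=DATA_FILE, split="train")

def to_chat(example):
    # Render each pair with the model's chat template (user/assistant roles).
    messages = [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat)
# 95/5 train/validation split, as described above.
split = dataset.train_test_split(test_size=0.05, seed=42)
train_ds, val_ds = split["train"], split["test"]
```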
Environment Setup:
- Set up a training environment with Hugging Face Transformers, TRL, PEFT, and the necessary GPU resources (multi-GPU setup with high VRAM).
- Loaded the base model in 4-bit quantization for memory efficiency during training (see the sketch below).
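A minimal sketch of the 4-bit load, assuming Transformers with bitsandbytes (the base-model id is the same placeholder as above):

```python
# pip install transformers bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

BASE_MODEL = "google/gemma-3n-E2B-it"  # assumption: substitute the actual base model id

# NF4 4-bit quantization keeps VRAM usage low during fine-tuning.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)
```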
Model Configuration:
- Applied LoRA (Low-Rank Adaptation) adapters for parameter-efficient fine-tuning on the base Gemma-4-E2B-it model, as sketched after this list.
- Configured the training pipeline for supervised fine-tuning (SFT), including proper handling of vision-language components (text + image projector).
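A LoRA configuration sketch with PEFT; the rank, alpha, and target modules below are illustrative, since the card does not state the actual values used:

```python
# pip install peft
from peft import LoraConfig, get_peft_model

# Illustrative hyperparameters, not the ones actually used for iris.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train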
Training:
- Ran supervised fine-tuning on the prepared samples (see the sketch after this list).
- Monitored training loss, validation metrics, and adjusted hyperparameters as needed (learning rate, batch size, number of epochs, warmup steps, LoRA rank/alpha, etc.).
- Completed the full training run to produce the fine-tuned "iris" model while preserving the uncensored behavior of the base.
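A minimal SFT loop with TRL, continuing the sketches above (all hyperparameters are illustrative):

```python
# pip install trl
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="iris-sft",
    dataset_text_field="text",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    warmup_steps=50,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
)
trainer.train()
```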
Post-Training Processing:
- Merged the LoRA adapters back into the base model weights (sketched below).
- Saved the resulting fine-tuned model in Hugging Face format.
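The merge step, sketched with PEFT's `merge_and_unload` (the output directory name is a placeholder):

```python
# Fold the LoRA weights into the base model and save in HF format.
merged = model.merge_and_unload()
merged.save_pretrained("merged-iris")
tokenizer.save_pretrained("merged-iris")
```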
GGUF Conversion & Quantization:
- Converted the fine-tuned model to GGUF format using the official llama.cpp tools (see the sketch after this list).
- Generated the main model file in Q4_K_M quantization.
- Converted the multimodal projector (mmproj) to `BF16-mmproj.gguf` format.
- Verified model integrity and basic functionality post-conversion.
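A sketch of the conversion and quantization calls, driven from Python for consistency with the snippets above. Script paths assume a llama.cpp checkout, and the `--mmproj` flag is an assumption about recent converter versions; check your version's docs:

```python
import subprocess

# 1. Convert the merged HF model to a bf16 GGUF with llama.cpp's converter.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "merged-iris",
     "--outtype", "bf16", "--outfile", "iris-bf16.gguf"],
    check=True,
)

# 2. Quantize the main weights down to Q4_K_M.
subprocess.run(
    ["./llama-quantize", "iris-bf16.gguf",
     "gemma-4-e2b-it.Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)

# 3. Export the vision projector separately (assumed flag; output naming
#    may differ between converter versions).
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "merged-iris", "--mmproj",
     "--outfile", "gemma-4-e2b-it.BF16-mmproj.gguf"],
    check=True,
)
```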
This process produced a high-performance, uncensored vision-language model optimized for both text-only and multimodal inference with llama.cpp.