Instructions for using MetaIX/Guanaco-33B-4bit with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- Transformers
How to use MetaIX/Guanaco-33B-4bit with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MetaIX/Guanaco-33B-4bit")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MetaIX/Guanaco-33B-4bit")
model = AutoModelForCausalLM.from_pretrained("MetaIX/Guanaco-33B-4bit")
```
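Once the pipeline is constructed, generation is a single call. A minimal sketch (prompt and sampling parameters are illustrative; note that this repo ships GPTQ/GGML files, so plain Transformers loading may require a GPTQ-aware setup):

```python
# Generate a short continuation with the pipeline defined above.
output = pipe(
    "Once upon a time,",
    max_new_tokens=50,   # length of the generated continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative sampling temperature
)
print(output[0]["generated_text"])
```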
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MetaIX/Guanaco-33B-4bit with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MetaIX/Guanaco-33B-4bit"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MetaIX/Guanaco-33B-4bit",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
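Because the server speaks the OpenAI-compatible API, the same request can be made from Python with the openai client instead of curl. A minimal sketch (the api_key value is a placeholder; vLLM ignores it unless you configure one):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.completions.create(
    model="MetaIX/Guanaco-33B-4bit",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```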
- SGLang
How to use MetaIX/Guanaco-33B-4bit with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "MetaIX/Guanaco-33B-4bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MetaIX/Guanaco-33B-4bit",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
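The same OpenAI-compatible endpoint can also be called from Python with the requests library, mirroring the curl invocation above. A minimal sketch:

```python
import requests

# Mirror the curl call against the SGLang server's completions endpoint.
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "MetaIX/Guanaco-33B-4bit",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```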
Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "MetaIX/Guanaco-33B-4bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MetaIX/Guanaco-33B-4bit",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
- Docker Model Runner
How to use MetaIX/Guanaco-33B-4bit with Docker Model Runner:
```shell
docker model run hf.co/MetaIX/Guanaco-33B-4bit
```
Information
This is a quantized version of Tim Dettmers' Guanaco 33B, working with Oobabooga's Text Generation WebUI and KoboldAI.
What's included
GPTQ: 2 quantized versions. One was quantized using the --true-sequential and --act-order optimizations, and the other was quantized using the --true-sequential and --groupsize 128 optimizations.
GGML: 3 quantized versions. One was quantized using q4_1, another using q5_0, and the last using q5_1.
GPU/GPTQ Usage
To run on your GPU using GPTQ, pick one of the .safetensors files along with all of the .json and .model files; a programmatic loading sketch follows after this list.
Oobabooga: If you require further instruction, see here and here
KoboldAI: If you require further instruction, see here
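Outside of those UIs, GPTQ checkpoints from this era can also be loaded programmatically. A minimal sketch assuming the auto-gptq package; since this repo predates auto-gptq's quantize_config.json convention, you may need to pass model_basename and an explicit BaseQuantizeConfig matching the variant you downloaded:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Load the 4-bit GPTQ weights onto the GPU; use_safetensors matches
# the .safetensors files shipped in this repo.
model = AutoGPTQForCausalLM.from_quantized(
    "MetaIX/Guanaco-33B-4bit",
    device="cuda:0",
    use_safetensors=True,
)
tokenizer = AutoTokenizer.from_pretrained("MetaIX/Guanaco-33B-4bit")

inputs = tokenizer("Once upon a time,", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```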
CPU/GGML Usage
To run on your CPU using GGML (llama.cpp), you only need the single .bin GGML file; a loading sketch follows after this list.
Oobabooga: If you require further instruction, see here
KoboldAI: If you require further instruction, see here
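For a programmatic route, the .bin file can be loaded with llama-cpp-python; note that GGML is the older llama.cpp format (current builds expect GGUF), so an older llama-cpp-python release may be required. A minimal sketch with an illustrative filename:

```python
from llama_cpp import Llama

# Load the GGML file downloaded from this repo (filename is illustrative).
llm = Llama(model_path="./guanaco-33b-q5_1.bin", n_ctx=2048)

output = llm("Once upon a time,", max_tokens=128, temperature=0.5)
print(output["choices"][0]["text"])
```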
Benchmarks
--true-sequential --act-order
Wikitext2: 4.582493305206299
Ptb-New: 8.697775840759277
C4-New: 6.67733097076416
Note: This version does not use --groupsize 128, so its evaluation scores are slightly higher. However, it allows fitting the whole model at full context using only 24 GB of VRAM.
--true-sequential --groupsize 128
Wikitext2: 4.369843006134033
Ptb-New: 8.53034496307373
C4-New: 6.496636390686035
Note: This version uses --groupsize 128, resulting in better evaluations. However, it consumes more VRAM.
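Assuming these figures are the standard GPTQ evaluation on Wikitext2, PTB-New, and C4-New, they are perplexity scores, so lower is better. For reference, perplexity over a tokenized sequence $x_1, \dots, x_N$ is:

$$
\mathrm{PPL} = \exp\!\left(-\frac{1}{N} \sum_{i=1}^{N} \log p_\theta\left(x_i \mid x_{<i}\right)\right)
$$

Lower perplexity means the quantized model's predictions stay closer to the full-precision model's behavior on the evaluation text, which is why the --groupsize 128 variant scores better at the cost of extra VRAM.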