Instructions to use SolusOps/Lizzy-7B-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use SolusOps/Lizzy-7B-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SolusOps/Lizzy-7B-GGUF",
    filename="lizzy-7b-Q3_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use SolusOps/Lizzy-7B-GGUF with llama.cpp:
Install from brew
```
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Use pre-built binary
```
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Build from source code
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Use Docker
docker model run hf.co/SolusOps/Lizzy-7B-GGUF:Q4_K_M
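llama-server exposes an OpenAI-compatible API, so any OpenAI client can talk to it once the server is up. A minimal Python sketch, assuming the openai package and llama-server's default port 8080 (the model name below is only a label; the server answers with whatever model it loaded):

```python
# pip install openai
from openai import OpenAI

# llama-server listens on port 8080 by default and ignores the API key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="SolusOps/Lizzy-7B-GGUF:Q4_K_M",  # label only; the loaded model responds
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```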
- LM Studio
- Jan
- vLLM
How to use SolusOps/Lizzy-7B-GGUF with vLLM:
Install from pip and serve model
```
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SolusOps/Lizzy-7B-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SolusOps/Lizzy-7B-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Ollama
How to use SolusOps/Lizzy-7B-GGUF with Ollama:
ollama run hf.co/SolusOps/Lizzy-7B-GGUF:Q4_K_M
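You can also script against the local Ollama daemon. A minimal Python sketch using requests against Ollama's documented /api/chat endpoint (the default port 11434 is an assumption; adjust if you changed it):

```python
# pip install requests
import requests

# Ollama's local API listens on port 11434 by default.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/SolusOps/Lizzy-7B-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(response.json()["message"]["content"])
```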
- Unsloth Studio
How to use SolusOps/Lizzy-7B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SolusOps/Lizzy-7B-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SolusOps/Lizzy-7B-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for SolusOps/Lizzy-7B-GGUF to start chatting
```
- Pi
How to use SolusOps/Lizzy-7B-GGUF with Pi:
Start the llama.cpp server
```
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Configure the model in Pi
```
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "SolusOps/Lizzy-7B-GGUF:Q4_K_M" }
      ]
    }
  }
}
```
Run Pi
```
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use SolusOps/Lizzy-7B-GGUF with Hermes Agent:
Start the llama.cpp server
```
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Configure Hermes
```
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Run Hermes
hermes
- Docker Model Runner
How to use SolusOps/Lizzy-7B-GGUF with Docker Model Runner:
docker model run hf.co/SolusOps/Lizzy-7B-GGUF:Q4_K_M
- Lemonade
How to use SolusOps/Lizzy-7B-GGUF with Lemonade:
Pull the model
```
# Download Lemonade from https://lemonade-server.ai/
lemonade pull SolusOps/Lizzy-7B-GGUF:Q4_K_M
```
Run and chat with the model
lemonade run user.Lizzy-7B-GGUF-Q4_K_M
List all available models
lemonade list
Lizzy-7B GGUF Quants
Update: Flower Labs has officially released their native GGUF quants. I highly recommend transitioning to their repository for the most stable inference and the corrected 32k context window: flwrlabs/Lizzy-7B-GGUF.
Note: During testing, I came across a RoPE/context-length bug, which has been patched in the official release. Thanks to the 250+ community members who tested this early build!
Quantized by SolusOps
Original model: FlowerLabs/Lizzy-7B
Official Quants: flwrlabs/Lizzy-7B-GGUF
About This Repo
This repository provides llama.cpp-compatible GGUF quants of Lizzy-7B, a UK-centric 7B language model built by Flower Labs. Refer to the original model card for more details on the model.
Available Quants
| File | Quant | Size | Use Case |
|---|---|---|---|
| Lizzy-7B-f16.gguf | F16 | ~14.6 GB | Needs 20GB+ VRAM or CPU offload. |
| Lizzy-7B-Q8_0.gguf | Q8_0 | ~7.7 GB | Recommended; fits 12GB VRAM with excellent context headroom. |
| Lizzy-7B-Q6_K.gguf | Q6_K | ~5.9 GB | For 10GB–12GB GPUs looking to maximize context size. |
| Lizzy-7B-Q5_K_M.gguf | Q5_K_M | ~5.1 GB | 8GB VRAM. |
| Lizzy-7B-Q4_K_M.gguf | Q4_K_M | ~4.1 GB | 6GB–8GB GPUs. |
| Lizzy-7B-Q3_K_M.gguf | Q3_K_M | ~3.5 GB | Edge devices, 4GB GPUs, or older laptops. |
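If you only need one quant rather than the whole repo, a minimal huggingface_hub sketch can fetch a single file (the filename below is taken from the table above; adjust it if the file in the repo is cased differently):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads a single GGUF file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="SolusOps/Lizzy-7B-GGUF",
    filename="Lizzy-7B-Q4_K_M.gguf",
)
print(path)
```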
Hardware Tested
| Hardware | Quant | n_ctx | Speed |
|---|---|---|---|
| RTX 3060 12GB | Q8_0 | 8192 | ~23 tok/s |
| RTX 3060 12GB | F16 | 4096 | Slower (VRAM overflow to RAM) |
Conversion Notes
1. Architecture: OLMo 2 Post-Norm Tensor Mapping
Lizzy-7B uses a Post-Norm variant of OLMo 2. The standard convert_hf_to_gguf.py script does not recognise Flower Labs' tensor naming conventions (post_attn_norm, post_mlp_norm) and will fail or silently produce a broken file. The fix was to register a LizzyForCausalLM model class in the llama.cpp conversion script, subclassing Olmo2Model and overriding modify_tensors() to remap the four divergent tensor names:
```python
@ModelBase.register("LizzyForCausalLM")
class LizzyModel(Olmo2Model):
    def modify_tensors(self, data_torch: Tensor, name: str, bid: int | None) -> Iterable[tuple[str, Tensor]]:
        # 1. Lizzy: post_attn_norm -> llama.cpp: post_attention_norm
        if name.endswith(".post_attn_norm.weight"):
            yield (f"blk.{bid}.post_attention_norm.weight", data_torch)
            return
        # 2. Lizzy: post_mlp_norm -> llama.cpp: post_ffw_norm
        if name.endswith(".post_mlp_norm.weight"):
            yield (f"blk.{bid}.post_ffw_norm.weight", data_torch)
            return
        # 3. QK-norms: these map correctly via the standard tensor paths
        if name.endswith(".q_norm.weight"):
            yield (self.format_tensor_name(gguf.MODEL_TENSOR.ATTN_Q_NORM, bid), data_torch)
            return
        if name.endswith(".k_norm.weight"):
            yield (self.format_tensor_name(gguf.MODEL_TENSOR.ATTN_K_NORM, bid), data_torch)
            return
        # 4. All other tensors: pass through normally
        yield from super().modify_tensors(data_torch, name, bid)
```
No weights were altered. Only the tensor name metadata was remapped.
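To double-check a converted file, the gguf Python package can list tensor names and confirm the remap landed. A minimal sketch (the path is illustrative; point it at whichever quant you downloaded):

```python
# pip install gguf
from gguf import GGUFReader

# Read the GGUF metadata and collect all tensor names.
reader = GGUFReader("Lizzy-7B-Q4_K_M.gguf")
names = [t.name for t in reader.tensors]

# After the remap, the post-norm tensors should carry llama.cpp's names:
print(any(n.endswith("post_attention_norm.weight") for n in names))
print(any(n.endswith("post_ffw_norm.weight") for n in names))
```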
2. RoPE Scaling Factor Correction
During conversion, the script raised this warning:
The explicitly set RoPE scaling factor (config.rope_parameters['factor'] = 8.0)
does not match the ratio implicitly set by other parameters
(implicit factor = max_position_embeddings / original_max_position_embeddings = 4.0).
Using the explicit factor (8.0) in YaRN. This may cause unexpected behaviour.
The implicit factor (4.0) is mathematically derived from the model's own position embedding settings. The explicit 8.0 in the upstream config appears to be an authoring error. To produce a consistent and correctly behaving GGUF, the factor was corrected from 8.0 to 4.0 in config.json before conversion.
This means the effective context window for these GGUFs reflects the 4.0× YaRN scaling, not 8.0×. If Flower Labs corrects the upstream config, a re-conversion would be straightforward.
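For reference, the pre-conversion edit amounts to recomputing the implicit factor and writing it back to config.json. A minimal sketch, treating the config key names as assumptions (the warning refers to rope_parameters, while many HF configs keep these fields under rope_scaling):

```python
import json

# Load the upstream HF config.
with open("config.json") as f:
    cfg = json.load(f)

# Key name is an assumption; check whether the config uses rope_scaling or rope_parameters.
rope = cfg.get("rope_scaling") or cfg.get("rope_parameters")

# Implicit factor = max_position_embeddings / original_max_position_embeddings (4.0 here).
implicit = cfg["max_position_embeddings"] / rope["original_max_position_embeddings"]
rope["factor"] = implicit  # replaces the authored 8.0

with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```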
License
The original Lizzy-7B model is released under Apache 2.0 by Flower Labs. These quants inherit that license.
About Me
This GGUF port was completed by Anshuman Singh.
- GitHub: github.com/SolusOps
- LinkedIn: linkedin.com/in/anshumansingh2023
If this port helped your local deployment, feel free to connect!