Instructions for using zenlm/zen4-coder-abliterated with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use zenlm/zen4-coder-abliterated with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="zenlm/zen4-coder-abliterated")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("zenlm/zen4-coder-abliterated", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use zenlm/zen4-coder-abliterated with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "zenlm/zen4-coder-abliterated"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zenlm/zen4-coder-abliterated",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/zenlm/zen4-coder-abliterated
```
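The curl call above can also be made from Python. Below is a minimal sketch using only the standard library, assuming the vLLM server from the previous step is already running on port 8000; the `build_completion_request` and `complete` helper names are illustrative, not part of any library.

```python
import json
from urllib import request

def build_completion_request(model: str, prompt: str,
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build the same JSON body as the curl example above."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(base_url: str, payload: dict) -> str:
    """POST to the OpenAI-compatible /v1/completions endpoint
    and return the generated text of the first choice."""
    req = request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

# Example (requires the vLLM server running on port 8000):
# payload = build_completion_request("zenlm/zen4-coder-abliterated",
#                                    "Once upon a time,")
# print(complete("http://localhost:8000", payload))
```

Because the endpoint is OpenAI-compatible, the same helpers work unchanged against the SGLang server below by switching the base URL to port 30000.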
- SGLang
How to use zenlm/zen4-coder-abliterated with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "zenlm/zen4-coder-abliterated" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zenlm/zen4-coder-abliterated",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "zenlm/zen4-coder-abliterated" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zenlm/zen4-coder-abliterated",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use zenlm/zen4-coder-abliterated with Docker Model Runner:
```shell
docker model run hf.co/zenlm/zen4-coder-abliterated
```
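All three serving options above (vLLM, SGLang, Docker Model Runner) expose the same OpenAI-compatible completions response format. The sketch below shows how to pull the generated text out of such a response; the sample payload is fabricated for illustration, with the field names following the OpenAI completions schema that these servers implement.

```python
import json

# Illustrative response in the OpenAI-compatible completions format
# (the values here are made up for the example).
sample = json.loads("""
{
  "id": "cmpl-123",
  "object": "text_completion",
  "model": "zenlm/zen4-coder-abliterated",
  "choices": [
    {"index": 0, "text": " there was a compiler.", "finish_reason": "length"}
  ],
  "usage": {"prompt_tokens": 5, "completion_tokens": 512, "total_tokens": 517}
}
""")

def extract_text(response: dict) -> str:
    """Concatenate the generated text from all returned choices."""
    return "".join(choice["text"] for choice in response["choices"])

print(extract_text(sample))
```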
Zen4 Coder — Safety Research
Parameters: 80B MoE | Architecture: Zen 4 Architecture | Context: 256K | License: Apache 2.0
Frontier code model fine-tuned for security research, vulnerability analysis, and adversarial code evaluation.
Weights at zenlm/zen4-coder.
AI Safety Research Variant
This model is part of Hanzo AI and Zoo Labs Foundation's alignment research program. We study how safety constraints interact with model capability to build better, safer AI systems.
Research applications: Red-teaming, alignment evaluation, safety benchmarking, capability assessment, adversarial testing. Why this matters: Understanding model behavior without safety overlays is essential for building more robust and genuinely safe AI systems.
The Zen LM Family
Joint research between Hanzo AI (Techstars '17), Zoo Labs Foundation (a 501(c)(3) nonprofit), and Lux Partners Limited.
All weights are Apache 2.0 licensed: download them, run locally, fine-tune, and deploy commercially.
HuggingFace · Chat · API · Docs
Model tree for zenlm/zen4-coder-abliterated
Base model
zenlm/zen4-coder