Instructions for running SparseLLM/DECO-1.2B with libraries and local apps.
- Libraries
- Transformers
How to use SparseLLM/DECO-1.2B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SparseLLM/DECO-1.2B", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("SparseLLM/DECO-1.2B", trust_remote_code=True, dtype="auto")
```
- Local Apps
- vLLM
How to use SparseLLM/DECO-1.2B with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SparseLLM/DECO-1.2B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SparseLLM/DECO-1.2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
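Once the server is up, you can also call it from Python with the `openai` client instead of curl. A minimal sketch, assuming `pip install openai` and the default port 8000 used above; the API key value is a placeholder, since the local server does not require one by default:

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="SparseLLM/DECO-1.2B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```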
- SGLang
How to use SparseLLM/DECO-1.2B with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SparseLLM/DECO-1.2B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SparseLLM/DECO-1.2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
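The SGLang endpoint is OpenAI-compatible as well, so the `openai` Python client works against it too. A minimal streaming sketch, assuming `pip install openai` and the port 30000 used above (the API key is again a placeholder):

```python
# Stream a chat completion from the local SGLang server (OpenAI-compatible API).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="SparseLLM/DECO-1.2B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental piece of the assistant's reply.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```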
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "SparseLLM/DECO-1.2B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SparseLLM/DECO-1.2B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use SparseLLM/DECO-1.2B with Docker Model Runner:
```bash
docker model run hf.co/SparseLLM/DECO-1.2B
```
---
language:
- en
- zh
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
# DECO-1.2B
This is the 1.2B DECO checkpoint introduced in the paper [DECO: Sparse Mixture-of-Experts with Dense-Comparable Performance on End-Side Devices](https://huggingface.co/papers/2605.10933).

DECO (Sparse Mixture-of-Experts with Dense-Comparable Performance on End-Side Devices) is a sparse MoE architecture designed to match the performance of dense Transformers under identical total parameter budgets and training tokens. It is an improved version of the [BlockFFN](https://arxiv.org/pdf/2507.08771) architecture.
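DECO's actual expert and router design is defined in the paper and in the repository's remote code. Purely as an illustration of the general idea behind a sparse MoE feed-forward layer, here is a toy top-k routed FFN in PyTorch; every name and hyperparameter in it is made up for the sketch and does not reflect DECO's configuration:

```python
# Toy illustration of a sparsely routed MoE feed-forward layer (NOT DECO's architecture).
import torch
import torch.nn as nn

class ToyMoEFFN(nn.Module):
    def __init__(self, d_model=1024, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        # Each token picks its top-k experts; only those experts run for that token,
        # so per-token compute stays far below the total parameter count.
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```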
- **Authors:** Chenyang Song, Weilin Zhao, Xu Han, Chaojun Xiao, Yingfa Chen, Zhiyuan Liu
- **Paper:** [arXiv:2605.10933](https://huggingface.co/papers/2605.10933)
- **Code:** [https://github.com/thunlp/DECO](https://github.com/thunlp/DECO)

### Quick start

You can load and use this model with `AutoTokenizer` and `AutoModelForCausalLM` from `transformers`. Since the model uses a custom architecture, `trust_remote_code=True` is required.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "SparseLLM/DECO-1.2B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda").eval()

prompt = "Mixture-of-Experts models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
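If the tokenizer ships a chat template (an assumption; the checkpoint is exposed as a text-generation model, and the pipeline example above passes chat messages), you can also format a conversation with `apply_chat_template`. This sketch continues from the model and tokenizer loaded above:

```python
# Build a chat-formatted prompt with the tokenizer's chat template (assumed to exist)
# and generate a reply with the model loaded in the quick-start snippet.
messages = [
    {"role": "user", "content": "Explain in one sentence why sparse MoE models are efficient."},
]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")

with torch.no_grad():
    chat_output = model.generate(chat_inputs, max_new_tokens=64, do_sample=False)

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(chat_output[0, chat_inputs.shape[-1]:], skip_special_tokens=True))
```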
### Citation

If you find our work useful for your research, please cite our paper as follows:

```bibtex
@article{song2026deco,
  title={{DECO}: Sparse Mixture-of-Experts with Dense-Comparable Performance on End-Side Devices},
  author={Chenyang Song and Weilin Zhao and Xu Han and Chaojun Xiao and Yingfa Chen and Zhiyuan Liu},
  journal={arXiv preprint arXiv:2605.10933},
  year={2026},
  url={https://arxiv.org/pdf/2605.10933},
}
```