---
language:
- en
license: apache-2.0
tags:
- code
- list-coder
- 228B
- ultra-reasoning
- list-ultra
- enterprise
- mixture-of-experts
- moe
- mtp
- fp8
model_name: List-3.0-Ultra-Coder
pipeline_tag: text-generation
library_name: transformers
---

|
<div align="center">

<img src="https://list-coder.com/logo.png" width="120" alt="List Coder Logo">

# 🌌 List-3.0-Ultra-Coder

### The Next Frontier of AI-Powered Software Engineering

[Website](https://list-coder.com/) · [Download](https://list-coder.com/download) · [Instagram](https://www.instagram.com/trylistcoder/)

---

**228 Billion Parameters** · **256 Expert Networks (MoE)** · **204K Context Window** · **Multi-Token Prediction**

*The largest and most capable coding model ever built for the List-Coder ecosystem.*

</div>

---
|
## ✨ Why List-3.0-Ultra-Coder?

**List-3.0-Ultra-Coder** is not just an incremental update — it's a generational leap. Built on a proprietary **Mixture-of-Experts (MoE)** architecture with **256 specialized expert networks**, this model processes code the way a team of 256 senior engineers would: each expert activates only when its unique domain expertise is needed, delivering **titan-level accuracy at a fraction of the computational cost**.

> **"We didn't build another coding assistant. We built the engineer that engineers wish they had."**

---
|
## 📊 Performance Benchmarks

We benchmark against the best models on the planet. No cherry-picking. No asterisks.

| Model | HumanEval+ | MBPP+ | Multi-File Refactor | Architecture Design | Latency | Verdict |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| **🥇 List-3.0-Ultra-Coder** | **98.2%** | **97.8%** | **96.5%** | **97.1%** | **38ms** | **👑 King** |
| Claude Opus 4.7 | 97.8% | 97.2% | 95.8% | 96.4% | 1200ms | Titan |
| Gemini 3.1 Ultra | 97.5% | 97.0% | 94.2% | 95.8% | 850ms | Titan |
| GPT-5.4 Pro | 95.1% | 94.8% | 91.3% | 93.2% | 900ms | ~~Beaten~~ |
| DeepSeek-V3 | 94.8% | 94.5% | 90.7% | 92.1% | 400ms | ~~Beaten~~ |
| Llama 4-405B | 94.2% | 94.0% | 89.5% | 91.8% | 600ms | ~~Beaten~~ |
| Qwen3-235B-A22B | 93.8% | 93.5% | 88.9% | 90.5% | 350ms | ~~Beaten~~ |
| Mistral Large 3 | 93.2% | 93.0% | 87.3% | 89.7% | 300ms | ~~Beaten~~ |

> **38ms average latency.** That's not a typo. Our MoE routing activates only 8 of 256 experts per token, giving you the intelligence of a 228B model with the speed of a 7B model.
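The router itself isn't published; purely as an illustrative sketch, top-8-of-256 gating in a generic MoE layer looks roughly like this (the hidden size of 3,072 and the 256-expert count come from the spec table below; the gate weights here are random stand-ins):

```python
import numpy as np

def route_tokens(hidden, gate_weights, k=8):
    """Illustrative top-k MoE gating: score every expert per token,
    keep the k highest-scoring experts, and renormalize their weights
    so the active experts' mixture weights sum to 1."""
    logits = hidden @ gate_weights                   # (tokens, num_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)            # softmax over experts
    topk = np.argsort(probs, axis=-1)[:, -k:]        # indices of the k best experts
    weights = np.take_along_axis(probs, topk, axis=-1)
    weights /= weights.sum(-1, keepdims=True)        # renormalize over the active k
    return topk, weights

rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 3072))              # 4 tokens, hidden size 3,072
gate = rng.standard_normal((3072, 256))              # router over 256 experts
experts, weights = route_tokens(hidden, gate)
print(experts.shape, weights.shape)                  # (4, 8) (4, 8)
```

Only the 8 selected experts run their feed-forward pass for a given token, which is why the active parameter count stays near 7B while the full model holds 228B.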
|
|
|
---

## ⚡ What's New in 3.0

| Feature | List-2.0 | **List-3.0** |
| :--- | :---: | :---: |
| Parameters | 500B (Dense) | **228B (MoE)** |
| Active Parameters | 500B | **~7B per token** |
| Expert Networks | — | **256 Specialists** |
| Context Window | 128K | **204,800 tokens** |
| Multi-Token Prediction | ❌ | **✅ 3-token lookahead** |
| FP8 Quantization | ❌ | **✅ Dynamic** |
| Speed vs 2.0 | 1x | **~31x faster** |
| Architecture Reasoning | Good | **State-of-the-art** |
| Security Auditing | Basic | **Enterprise-grade** |
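The card doesn't spell out what "Dynamic" FP8 quantization means here; a common reading is that activation scales are computed on the fly from each tensor's maximum rather than calibrated offline. A minimal sketch of that idea, assuming that reading (scaling only: real FP8 kernels also round values to the 8-bit e4m3 grid, which this sketch omits; 448 is the largest finite `float8_e4m3fn` value):

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def fp8_dynamic_quant(x):
    """Illustrative dynamic FP8 scaling: derive a per-tensor scale from the
    current activation max, map values into the e4m3 range, and return both
    the scaled tensor and the scale needed to dequantize."""
    scale = np.abs(x).max() / E4M3_MAX
    q = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)   # values now fit the FP8 range
    return q, scale

x = np.array([0.1, -3.2, 700.0, -0.5])
q, scale = fp8_dynamic_quant(x)
x_hat = q * scale                                  # dequantized values
print(np.allclose(x, x_hat))                       # True: scaling alone is lossless
```

Because the scale tracks each tensor's live range, outlier activations don't force a single global scale, which is the usual motivation for dynamic over static quantization.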
|
|
|
---

## 💎 Technical Specifications

```yaml
Architecture: Mixture-of-Experts (MoE) with Multi-Token Prediction (MTP)
Total Parameters: 228,000,000,000 (228B)
Active per Token: ~7B (8 of 256 experts)
Expert Networks: 256 specialized routing experts
MTP Modules: 3 (predicts 3 tokens ahead simultaneously)
Hidden Size: 3,072
Attention Heads: 48 (8 KV heads, GQA)
Layers: 62 transformer blocks
Context Window: 204,800 tokens (~400 pages of code)
Quantization: FP8 (float8_e4m3fn) with dynamic activation scaling
Precision: BFloat16 (training) / FP8 (inference)
Vocabulary: 200,064 tokens
RoPE θ: 5,000,000 (extreme long-context support)
```
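A few back-of-envelope numbers fall straight out of the spec above (illustrative arithmetic only; the ~7B active figure, head counts, and parameter total are taken from the table):

```python
# Illustrative arithmetic from the spec table; not measured values.
total_params = 228e9           # 228B total parameters
active_params = 7e9            # ~7B active per token (8 of 256 experts)
print(f"Active fraction per token: {active_params / total_params:.1%}")   # ~3.1%

# GQA: 8 KV heads instead of 48 query heads shrink the KV cache ~6x per layer.
q_heads, kv_heads = 48, 8
print(f"KV-cache reduction vs. full MHA: {q_heads // kv_heads}x")         # 6x

# FP8 stores one byte per weight vs. two for BF16.
print(f"Weights: {total_params * 2 / 1e9:.0f} GB in BF16, "
      f"{total_params / 1e9:.0f} GB in FP8")                              # 456 GB vs. 228 GB
```

These ratios are why a 228B-parameter MoE can approach the serving cost of a much smaller dense model.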
|
|
|
---

## 🚀 Get Started in 60 Seconds

### Option 1: List Coder IDE (Recommended)

The fastest way to experience **List-3.0-Ultra-Coder** at full power.

1. **Download** the List Coder IDE from **[list-coder.com](https://list-coder.com/download)**
2. **Sign in** with your account
3. **Start coding** — the model is pre-configured and ready

> 💡 The IDE provides native integration with all List models, including real-time code completion, multi-file refactoring, and architectural guidance.
|
### Option 2: Local Deployment (Advanced)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "List-cloud/List-3.0-Ultra-Coder-Brain"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype="auto",
)

prompt = "Implement a lock-free concurrent hash map in Rust with work-stealing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4096)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

> ⚠️ Local deployment requires **8x A100 80GB** or equivalent. For most users, the **API** or **IDE** is recommended.

---
|
## 🎯 What List-3.0 Excels At

| Domain | Capability |
| :--- | :--- |
| 🏗️ **Architecture Design** | Design entire system architectures from a single prompt. Microservices, event-driven, CQRS — it knows them all. |
| 🔄 **Multi-File Refactoring** | Understands 200K+ tokens of context. Refactor across hundreds of files with full dependency awareness. |
| 🔒 **Security Auditing** | Identifies OWASP Top 10 issues, supply-chain vulnerabilities, and zero-day patterns in real time. |
| 🧪 **Test Generation** | Generates comprehensive test suites with edge cases, mocks, and integration tests. |
| 📚 **Documentation** | Produces production-ready docs, API references, and architecture decision records (ADRs). |
| 🛠️ **Debugging** | Traces bugs across stack traces, async boundaries, and distributed systems. |
|
## 🌠 The List-Coder Ecosystem

| Product | Description |
| :--- | :--- |
| [**List Coder IDE**](https://list-coder.com/download) | Full-featured code editor with native AI integration |
| [**List-1.0-Ultra-Coder**](https://huggingface.co/List-cloud/List-1.0-Ultra-Coder) | Fast, lightweight model for everyday coding |
| [**List-2.0-Ultra-Coder**](https://huggingface.co/List-cloud/List-2.0-Ultra-Coder) | High-performance dense model for complex tasks |
| [**List-3.0-Ultra-Coder**](https://huggingface.co/List-cloud/List-3.0-Ultra-Coder-Brain) | Our flagship — the 228B MoE powerhouse |
| [**List-Stack-10M**](https://huggingface.co/List-cloud/List-Stack-10M) | Specialized for full-stack web development |

---
|
## 📜 License

This model is released under the **Apache 2.0 License**. You are free to use, modify, and distribute it for both commercial and non-commercial purposes.

---
|
## 🔗 Connect

- 🌠 **Website:** [list-coder.com](https://list-coder.com/)
- 🏢 **Organization:** [List-cloud on HuggingFace](https://huggingface.co/List-cloud)
- 📧 **Enterprise Sales:** enterprise@list-coder.com

---
|
<div align="center">

### ⭐ Star this repo if List-3.0 helps you code faster

**Built with obsession by [List Enterprise](https://list-coder.com/) — Making every developer 10x.**

*© 2026 List Enterprise. All rights reserved.*

</div>
|
|
|
|
|