# AOS-COS-v1.0: ArmadaOS Chief of Staff
A custom fine-tuned model for the ArmadaOS Chief of Staff agent, built on Qwen3.5-9B with a 262K-token native context window.
## Model Details
- Base Model: Qwen3.5-9B (262K context, extensible to 1M with YaRN)
- Fine-tuning Method: bf16 LoRA (rank 16, alpha 16)
- Training Data: 315 Gold Standard examples across 7 capability categories
  - SFT: 220 examples (155 trajectory + 65 function calling)
  - DPO: 95 preference pairs
- Quantization: Q4_K_M (5.02 bits per weight)
- File Size: 5.3 GB
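As a sanity check on the figures above, the on-disk size implied by 5.02 bits per weight over roughly 9B parameters lines up with the listed 5.3 GB (read as GiB). A quick back-of-the-envelope sketch:

```python
# Rough size check: 9B parameters at 5.02 effective bits per weight.
params = 9e9            # approximate parameter count of the 9B base model
bits_per_weight = 5.02  # effective rate of the Q4_K_M quantization

size_bytes = params * bits_per_weight / 8
size_gib = size_bytes / 2**30

print(f"{size_gib:.2f} GiB")  # ~5.26 GiB, consistent with the listed 5.3 GB
```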
## Capability Categories
| Category | Description | Examples |
|---|---|---|
| A | Boot & Identity | 50 |
| B | Error Correction Protocol | 15 |
| C | 10-Layer Memory System | 50 |
| D | Operational Cadence | 50 |
| E | Governance & Constitution | 50 |
| F | Failure & Antifragility | 50 |
| G | Decision & Communication | 50 |
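The per-category counts above can be tallied against the 315-example total and the 220 SFT + 95 DPO split:

```python
# Example counts per capability category, as listed in the table above.
categories = {
    "A: Boot & Identity": 50,
    "B: Error Correction Protocol": 15,
    "C: 10-Layer Memory System": 50,
    "D: Operational Cadence": 50,
    "E: Governance & Constitution": 50,
    "F: Failure & Antifragility": 50,
    "G: Decision & Communication": 50,
}

total = sum(categories.values())
print(total)              # 315, the Gold Standard total
assert total == 220 + 95  # SFT examples + DPO preference pairs
```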
## Usage with Ollama

```shell
ollama run ArmadaOS/AOS-COS-v1.0
```
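Besides the interactive CLI, the model can be queried programmatically through Ollama's local REST API (served on port 11434 by default). A minimal sketch that builds the request body for the `/api/generate` endpoint; the prompt text is purely illustrative:

```python
import json

# Request body for Ollama's /api/generate endpoint
# (local server, http://localhost:11434 by default).
payload = {
    "model": "ArmadaOS/AOS-COS-v1.0",
    "prompt": "Summarize today's operational cadence.",  # illustrative prompt
    "stream": False,  # return one complete JSON response instead of a stream
}
body = json.dumps(payload)
print(body)

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/generate",
#       data=body.encode(), headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```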
## Training Infrastructure
- GPU: NVIDIA RTX A6000 (48GB) / L40 (48GB)
- Platform: RunPod
- Framework: Unsloth + TRL
- Training Time: ~45 min SFT + ~20 min DPO
## Version History
- v1.0: Initial release. SFT + DPO on 315 Gold Standard v5.3 examples.
Built by ArmadaOS. Compound, Never Lose.