---
library_name: transformers
license: mit
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
- code
- agent
- tool-calling
- distillation
- qwen3
- gguf
- llama-cpp
language:
- en
pipeline_tag: text-generation
---
# LocoOperator

[![MODEL](https://img.shields.io/badge/Model-FFB300?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/LocoreMind/LocoOperator-4B) [![Blog](https://img.shields.io/badge/Blog-4285F4?style=for-the-badge&logo=google-chrome&logoColor=white)](https://locoremind.com/blog/loco-operator) [![GitHub](https://img.shields.io/badge/GitHub-181717?style=for-the-badge&logo=github&logoColor=white)](https://github.com/LocoreMind/LocoOperator) [![Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&logoColor=white)](https://colab.research.google.com/github/LocoreMind/LocoOperator/blob/main/LocoOperator_4B.ipynb)
## Introduction

**LocoOperator-4B** is a 4B-parameter tool-calling agent model trained via knowledge distillation from **Qwen3-Coder-Next** inference traces. It specializes in multi-turn codebase exploration — reading files, searching code, and navigating project structures within a Claude Code-style agent loop. Designed as a local subagent, it runs via llama.cpp at zero API cost.

| | LocoOperator-4B |
|:--|:--|
| **Base Model** | [Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) |
| **Teacher Model** | Qwen3-Coder-Next |
| **Training Method** | Full-parameter SFT (distillation) |
| **Training Data** | 170,356 multi-turn conversation samples |
| **Max Sequence Length** | 16,384 tokens |
| **Training Hardware** | 4x NVIDIA H200 141GB SXM5 |
| **Training Time** | ~25 hours |
| **Framework** | MS-SWIFT |

## Key Features

- **Tool-Calling Agent**: Generates structured `<tool_call>` JSON for Read, Grep, Glob, Bash, Write, Edit, and Task (subagent delegation)
- **100% JSON Validity**: Every tool call is valid JSON with all required arguments — outperforming the teacher model (87.6%)
- **Local Deployment**: GGUF quantized, runs on Mac Studio via llama.cpp at zero API cost
- **Lightweight Explorer**: 4B parameters, optimized for fast codebase search and navigation
- **Multi-Turn**: Handles conversation depths of 3–33 messages with consistent tool-calling behavior

## Performance

Evaluated on 65 multi-turn conversation samples from diverse open-source projects (scipy, fastapi, arrow, attrs, gevent, gunicorn, etc.), with labels generated by Qwen3-Coder-Next.

### Core Metrics

| Metric | Score |
|:-------|:-----:|
| **Tool Call Presence Alignment** | **100%** (65/65) |
| **First Tool Type Match** | **65.6%** (40/61) |
| **JSON Validity** | **100%** (76/76) |
| **Argument Syntax Correctness** | **100%** (76/76) |

The model perfectly learned *when* to use tools versus when to respond with plain text (100% presence alignment). Tool-type mismatches occur between semantically similar tools (e.g., Grep vs. Read) — different but often valid strategies.
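The JSON-validity and argument-syntax figures above can be checked mechanically from raw completions. Below is a minimal sketch, assuming the base Qwen3 chat template's `<tool_call>…</tool_call>` wrapping; the required-argument table is illustrative, not the project's actual tool schemas:

```python
import json
import re

# Qwen3-style tool-call blocks; the exact tag comes from the chat template.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

# Illustrative required arguments per tool -- the real schemas live in the
# tool definitions passed to the model at inference time.
REQUIRED_ARGS = {
    "Read": ["file_path"],
    "Grep": ["pattern"],
    "Glob": ["pattern"],
    "Bash": ["command"],
}

def check_completion(completion: str) -> tuple[int, int, int]:
    """Count (total, json_valid, args_valid) tool calls in one completion."""
    total = json_valid = args_valid = 0
    for block in TOOL_CALL_RE.findall(completion):
        total += 1
        try:
            call = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON counts against validity
        json_valid += 1
        args = call.get("arguments") or {}
        if all(key in args for key in REQUIRED_ARGS.get(call.get("name"), [])):
            args_valid += 1  # an empty `arguments: {}` fails this check
    return total, json_valid, args_valid
```

A call with an empty `arguments: {}` passes the JSON check but fails the argument check, which is exactly the failure mode behind the teacher's 11 invalid calls (see the table below).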
### Tool Distribution Comparison

*(Figure: per-tool call counts for LocoOperator-4B vs. the Qwen3-Coder-Next teacher.)*
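The distribution itself is just a tally over parsed call names. A short sketch in the same vein (`student_outputs` / `teacher_outputs` are hypothetical stand-ins for saved eval completions):

```python
import json
import re
from collections import Counter

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def tool_distribution(completions: list[str]) -> Counter:
    """Tally tool names across a list of raw model completions."""
    names: Counter = Counter()
    for text in completions:
        for block in TOOL_CALL_RE.findall(text):
            try:
                call = json.loads(block)
            except json.JSONDecodeError:
                continue
            if isinstance(call, dict):
                names[call.get("name", "?")] += 1
    return names

# Compare, e.g.: tool_distribution(student_outputs) vs. tool_distribution(teacher_outputs)
```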
### JSON & Argument Syntax Correctness

| Model | JSON Valid | Argument Syntax Valid |
|:------|:---------:|:--------------------:|
| **LocoOperator-4B** | 76/76 (100%) | 76/76 (100%) |
| Qwen3-Coder-Next (teacher) | 89/89 (100%) | 78/89 (87.6%) |

> LocoOperator-4B achieves perfect structured output. The teacher model produced 11 tool calls with missing required arguments (empty `arguments: {}`).

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LocoreMind/LocoOperator-4B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the messages
messages = [
    {
        "role": "system",
        "content": "You are a read-only codebase search specialist.\n\nCRITICAL CONSTRAINTS:\n1. STRICTLY READ-ONLY: You cannot create, edit, delete, move files, or run any state-changing commands. Use tools/bash ONLY for reading (e.g., ls, find, cat, grep).\n2. EFFICIENCY: Spawn multiple parallel tool calls for faster searching.\n3. OUTPUT RULES: \n - ALWAYS use absolute file paths.\n - STRICTLY NO EMOJIS in your response.\n - Output your final report directly. Do not use colons before tool calls.\n\nENV: Working directory is /Users/developer/workspace/code-analyzer (macOS, zsh)."
    },
    {
        "role": "user",
        "content": "Analyze the Black codebase at `/Users/developer/workspace/code-analyzer/projects/black`.\nFind and explain:\n1. How Black discovers config files.\n2. The exact search order for config files.\n3. Supported config file formats.\n4. Where this configuration discovery logic lives in the codebase.\n\nReturn a comprehensive answer with relevant code snippets and absolute file paths."
    }
]

# prepare the model input
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# generate the completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)
print(content)
```

## Local Deployment

For GGUF quantized deployment with llama.cpp, hybrid proxy routing, and batch analysis pipelines, refer to our [GitHub repository](https://github.com/LocoreMind/LocoOperator).
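For a quick local smoke test without the full agent stack, the `llama-cpp-python` bindings can load a GGUF quant directly. A minimal sketch (the GGUF filename is a hypothetical placeholder for whichever quant you download):

```python
from llama_cpp import Llama

# Hypothetical local path -- download a GGUF quant of LocoOperator-4B first.
llm = Llama(
    model_path="./LocoOperator-4B-Q4_K_M.gguf",
    n_ctx=16384,      # matches the model's max training sequence length
    n_gpu_layers=-1,  # offload all layers to Metal/CUDA when available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a read-only codebase search specialist."},
        {"role": "user", "content": "Find where Black discovers its config files."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```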
## Training Details

| Parameter | Value |
|:----------|:------|
| Base model | Qwen3-4B-Instruct-2507 |
| Teacher model | Qwen3-Coder-Next |
| Method | Full-parameter SFT |
| Training data | 170,356 samples |
| Hardware | 4x NVIDIA H200 141GB SXM5 |
| Parallelism | DDP (no DeepSpeed) |
| Precision | BF16 |
| Epochs | 1 |
| Batch size | 2 per GPU, gradient accumulation 4 (effective batch 32) |
| Learning rate | 2e-5, warmup ratio 0.03 |
| Max sequence length | 16,384 tokens |
| Template | qwen3_nothinking |
| Framework | MS-SWIFT |
| Training time | ~25 hours |
| Checkpoint | Step 2524 |

## Known Limitations

- First-tool-type match is 65.6% — the model sometimes picks a different (but not necessarily wrong) tool than the teacher
- Tends to under-generate parallel tool calls compared to the teacher (76 vs. 89 total calls across 65 samples)
- A preference for Bash over Read may indicate the model defaults to shell commands where direct file reads would be more appropriate
- Evaluated on 65 samples only; larger-scale evaluation is needed

## License

MIT

## Acknowledgments

- [Qwen Team](https://huggingface.co/Qwen) for the Qwen3-4B-Instruct-2507 base model
- [MS-SWIFT](https://github.com/modelscope/ms-swift) for the training framework
- [llama.cpp](https://github.com/ggerganov/llama.cpp) for efficient local inference
- [Anthropic](https://www.anthropic.com/) for the Claude Code agent loop design that inspired this work