English | δΈ­ζ–‡

SimpleTool

Parallel Decoding for Real-Time LLM Function Calling

A 4B-parameter LLM achieving 16 Hz end-to-end real-time function calling β€” fast enough to drive game AI, robotic arms, and digital humans.


SimpleTool enables real-time LLM function calling through multi-head parallel decoding. By introducing special tokens that compress redundant structured output (4–6Γ—) and enable independent generation of function name and arguments, we achieve 3–6Γ— end-to-end speedup while maintaining competitive accuracy across three application domains: games, robotic control, and digital human animation.

SimpleTool Overview

How It Works

Traditional function calling generates tokens sequentially β€” function β†’ arg1 β†’ arg2 β†’ ... β€” so latency scales linearly with output length. SimpleTool exploits two key observations:

  1. Token Redundancy: Structured outputs contain predictable tokens (brackets, parameter names, quotes) that can be compressed into single special tokens.
  2. Weak Causal Dependencies: Function arguments are largely independent of each other and can be generated in parallel.

SimpleTool Architecture

By decoding function name and arguments as parallel streams sharing the same prefix KV cache, latency drops from sum(all_token_times) to max(per_head_time). The parallel heads utilize idle compute capacity within the memory-bandwidth-bound decode phase, making parallelization nearly free.
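As a back-of-the-envelope illustration (using the per-token decode cost and head lengths from the benchmark output in step 3 below, not a formal measurement), the sum-versus-max difference looks like this:

# Illustrative latency model, not a measurement
per_token_ms = 6.7              # per-token decode cost from the benchmark summary below
head_lengths = [4, 5, 5, 5, 3]  # tokens emitted by the function / arg1..arg4 heads

sequential_ms = per_token_ms * sum(head_lengths)  # sequential decoding: sum(all_token_times)
parallel_ms = per_token_ms * max(head_lengths)    # parallel heads: max(per_head_time)

print(f"sequential ~{sequential_ms:.0f} ms, parallel ~{parallel_ms:.0f} ms, "
      f"speedup ~{sequential_ms / parallel_ms:.1f}x")  # ~147 ms vs ~34 ms, ~4.4x

The reported 3–6× end-to-end speedup comes from this same sum-to-max reduction, with the exact factor depending on how many arguments a call has and how long each head's output is.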

For more details, see our arXiv paper.


Quick Start

1. Setup Environment

git clone https://github.com/HaxxorCialtion/SimpleTool.git
cd SimpleTool

Option A β€” uv (recommended)

uv venv env_rt -p python3.12
source env_rt/bin/activate
uv pip install -r requirements.txt

Option B β€” conda

conda create -n simpletool python=3.12 -y
conda activate simpletool
pip install -r requirements.txt

Option C β€” pip

python3.12 -m venv env_rt
source env_rt/bin/activate
pip install -r requirements.txt

2. Download Model

The recommended default model is RT-Qwen3-4B-AWQ-v2 (4B parameters, AWQ W4A16 quantized, v2 prompt format). All scripts default to ./models/RT-Qwen3-4B-AWQ-v2.

# HuggingFace
huggingface-cli download Cialtion/SimpleTool \
  --include "RT-Qwen3-4B-AWQ-v2/*" --local-dir ./models

# Or ModelScope
modelscope download --model cialtion/SimpleTool \
  --include "RT-Qwen3-4B-AWQ-v2/*" --local_dir ./models
All Available Models

Model                  Params      Latency   HuggingFace   ModelScope
RT-Qwen2.5-0.5B-AWQ    0.5B        ~30ms     πŸ€—            Link
RT-Qwen2.5-1.5B-AWQ    1.5B        ~40ms     πŸ€—            Link
RT-Qwen2.5-3B-AWQ      3B          ~50ms     πŸ€—            Link
RT-Qwen3-4B-AWQ-v2     4B          ~60ms     πŸ€—            Link
RT-Qwen3-4B-AWQ        4B          ~60ms     πŸ€—            Link
RT-Qwen2.5-7B-AWQ      7B          ~70ms     πŸ€—            Link
RT-Qwen2.5-14B-AWQ     14B         ~130ms    πŸ€—            Link
RT-Qwen3-30B-A3B-AWQ   30B (A3B)   ~         πŸ€—            Link

Latency measured on an RTX 4090 with vLLM prefix caching. v2 models use an improved, clearer prompt format; v1 models use the earlier multi-head instruction header. FP16 models are also available for download from Hugging Face and ModelScope.

3. Run Benchmark (No Server Needed)

01_benchmark.py runs multi-head parallel decoding directly via vLLM across three application domains β€” game AI, robotic arm control, and digital human animation β€” with cold start / hot prefill / decode bottleneck analysis.

# v2 model (default)
python 01_benchmark.py --version v2

# v1 model
python 01_benchmark.py --version v1 --model ./models/RT-Qwen3-4B-AWQ

# Auto-detect optimal head count per scenario
python 01_benchmark.py --n-args auto

Example output:

  PARALLEL TEST (v2)

─── Game β€” Tower Defense ───
PASS  use_skill(Amiya)
  function   use_skill                                     4    OK
  arg1       Amiya                                         4    FILL
  arg2       <|null|>                                      3    NULL
  e2e=24.6ms  max_tok=4

─── Robotic Arm β€” Assembly ───
PASS  move_to(300,150,50,slow)
  function   move_to                                       4    OK
  arg1       300                                           5    FILL
  arg2       150                                           5    FILL
  arg3       500                                           5    FILL
  arg4       slow                                          3    FILL
  e2e=39.9ms  max_tok=5

─── Digital Human β€” Streamer ───
PASS  speak(welcome,cheerful)
  function   speak                                         4    OK
  arg1       Welcome!                                      4    FILL
  arg2       cheerful                                      5    FILL
  e2e=29.1ms  max_tok=5

  SUMMARY (v2)
  Accuracy       : 3/3
  Cold start avg : 56.1ms
  Hot prefill avg: 29.3ms
  E2E avg (hot)  : 31.2ms
  E2E / max_tok  : 6.7ms/tok (decode bottleneck)

The script also prints the full prompt structure and reconstructed multi-head output for inspection.
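One plausible reading of the summary numbers (the script may compute them slightly differently) is a straight average over the three hot scenarios above:

# (e2e_ms, max_tok) per hot scenario, taken from the example output above
runs = [(24.6, 4), (39.9, 5), (29.1, 5)]

e2e_avg = sum(e2e for e2e, _ in runs) / len(runs)             # 31.2 ms  -> "E2E avg (hot)"
ms_per_tok = sum(e2e / tok for e2e, tok in runs) / len(runs)  # ~6.7 ms/tok -> "E2E / max_tok"
# The second figure is the effective per-token decode cost, i.e. the decode bottleneck.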

4. Start Server

02_server.py wraps the engine in a FastAPI server with CORS support. HTML game clients connect to it.

python 02_server.py

Server starts at http://localhost:8899 with two endpoints:

Endpoint            Method   Description
/health             GET      Health check, model version info
/v1/function_call   POST     Multi-head parallel function call

Edit MODEL_PATH and MODEL_VERSION at the top of 02_server.py to switch between v1/v2 models.
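For a quick smoke test without the bundled client, you can hit the endpoints directly. The request fields below ("query", "tools") are placeholders for illustration only; the actual request schema is defined in 02_server.py and simpletool-game.skill.md.

import requests

BASE = "http://localhost:8899"

# Health check: returns model / version info
print(requests.get(f"{BASE}/health").json())

# Multi-head parallel function call.
# NOTE: "query" and "tools" are hypothetical field names used for illustration;
# check 02_server.py for the real request schema.
payload = {
    "query": "Deploy Amiya's skill",
    "tools": [{"name": "use_skill", "parameters": ["operator"]}],
}
print(requests.post(f"{BASE}/v1/function_call", json=payload).json())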

5. Test Server

With the server running, test it from another terminal:

python 03_test_server.py

This sends the same three domain scenarios (game, robotic arm, digital human) to the server API and reports accuracy, cold/hot latency, and per-head output.

# Custom server URL
python 03_test_server.py --url http://192.168.1.100:8899

# More hot rounds
python 03_test_server.py --rounds 10

6. Play Demos

Open demo HTML files in your browser. They connect to the running SimpleTool server.

Demo         Description               File
Pong         AI vs Human paddle game   demos/pong_game.html
Neon Arena   Multi-AI battle shooter   demos/neon_arena.html

For games with extra assets:

cd demos/neon_arena
python3 -m http.server 8080 --bind 127.0.0.1

Then open http://127.0.0.1:8080/neon_arena.html and enter your SimpleTool server URL (default: http://localhost:8899).


Project Structure

SimpleTool/
β”œβ”€β”€ 01_benchmark.py          # Step 1: Direct parallel decode benchmark
β”œβ”€β”€ 02_server.py             # Step 2: FastAPI vLLM server
β”œβ”€β”€ 03_test_server.py        # Step 3: Server API test client
β”œβ”€β”€ prompts/                 # External prompt & scenario files
β”‚   β”œβ”€β”€ v1_system.txt        #   v1 multi-head system prompt
β”‚   β”œβ”€β”€ scenarios.json       #   3 domain test scenarios
β”‚   β”œβ”€β”€ tools_game.jsonl     #   Tower defense tool definitions
β”‚   β”œβ”€β”€ tools_arm.jsonl      #   Robotic arm tool definitions
β”‚   └── tools_avatar.jsonl   #   Digital human tool definitions
β”œβ”€β”€ models/                  # Downloaded models go here
β”‚   └── RT-Qwen3-4B-AWQ-v2/ #   Default model
β”œβ”€β”€ demos/                   # HTML game clients
β”‚   β”œβ”€β”€ pong_game.html
β”‚   └── neon_arena/
β”œβ”€β”€ assets/                  # Figures for README
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ simpletool-game.skill.md # Guide for building new games with AI
β”œβ”€β”€ README.md
└── README_zh.md

Build Your Own Game

Feed simpletool-game.skill.md together with this README.md into your AI coding agent (Claude Code, Codex, Antigravity, etc.). The skill file covers the server API spec, tool definition format, query design best practices, frontend templates, and dynamic head optimization tips, while the README gives the agent an overview of the project structure. Together they provide everything needed to vibe-code a SimpleTool-powered game.


Roadmap

  • World Simulation β€” Large-scale (1,000+ NPCs) real-time AI world simulation with < 200ms action latency per agent
  • Speculative & Multi-Token Decoding β€” Speculative decoding and multi-token prediction for further latency reduction
  • Native Windows Support β€” Windows game engine plugins and native runtime (no need for Docker or WSL)
  • Apple Ecosystem β€” Mac and iPhone on-device deployment (CoreML / Metal)
  • v3 Architecture β€” Fast thinking (real-time SimpleTool) + slow thinking (async meta-cognition) fusion
  • Embodied Intelligence β€” Virtual 3D digital humans, large-scale game engine integration demos
  • Open Source Training β€” Full training code and dataset release

Demo Videos

Video demos coming soon β€” showcasing real-time game AI, robotic arm control, and digital human animation.


Citation

@article{shi2026simpletool,
  title={SimpleTool: Parallel Decoding for Real-Time LLM Function Calling},
  author={Shi, Xiaoxin and Wan, Jiaxin and Dong, Linkang and Jiang, Wei and Liu, Yue and Huang, Zengfeng},
  journal={arXiv preprint arXiv:2603.00030},
  year={2026}
}

Contact

License

Apache 2.0
