---
base_model:
- Qwen/Qwen3-0.6B
- MultiverseComputing/LittleLamb-0.3B
library_name: transformers
license: apache-2.0
---
<div align="center">

# LittleLamb 0.3B Tool-Calling

### Powered by CompactifAI

[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0) · [Model on Hugging Face](https://huggingface.co/MultiverseComputingCAI/LittleLamb-ToolCalling) · [Discord](https://discord.gg/cGas9uStqp)

**Tiny Model** · **50% Compressed** · **Native Tool Calling** · **Thinking & Non-Thinking Modes**

</div>

---

## Table of Contents

- [Model Overview](#model-overview)
- [Key Characteristics](#key-characteristics)
- [Quick Start](#quick-start)
- [What's New in LittleLamb 0.3B Tool-Calling](#whats-new-in-littlelamb-03b-tool-calling)
- [Tool Calling](#tool-calling)
- [Dual-Mode Inference (Thinking / Non-Thinking)](#dual-mode-inference-thinking--non-thinking)
- [Training & Fine-Tuning](#training--fine-tuning)
- [Architecture](#architecture)
- [Evaluation & Benchmarks](#evaluation--benchmarks)
- [Languages](#languages)
- [Intended Use](#intended-use)
- [Safety & Limitations](#safety--limitations)
- [Model Information](#model-information)
- [Citation](#citation)

---

## Model Overview

**LittleLamb 0.3B Tool-Calling** is a **tool-calling-optimized variant** of [LittleLamb 0.3B](https://huggingface.co/MultiverseComputing/LittleLamb-0.3B) at **290M parameters**, derived from [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) by **Multiverse Computing**. Built on top of the CompactifAI-compressed LittleLamb base, this variant has been additionally fine-tuned for **function calling, structured outputs, and agentic workflows**. It supports both **thinking and non-thinking modes** and adds native tool-use support in a sub-300M-parameter footprint.

---

## Key Characteristics

| Characteristic   | Description                                                                                                      |
| ---------------- | ---------------------------------------------------------------------------------------------------------------- |
| **Base model**   | [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) (0.6B params, 0.44B non-embedding; open-weight, Apache 2.0) |
| **Tool calling** | Native support for function calling with defined schemas and structured outputs                                  |
| **Parameters**   | 290M total parameters after CompactifAI compression (~50% compression from the 0.6B base)                        |
| **Architecture** | Decoder-only Transformer (Qwen3 family)                                                                          |
| **Compression**  | CompactifAI (proprietary)                                                                                        |
| **Languages**    | English. Spanish has not yet been tested for tool calling.                                                       |
| **Modes**        | Thinking (`enable_thinking=True`) and non-thinking (`enable_thinking=False`) via the chat template               |

---

## Quick Start

This model can be loaded with the **Transformers** library; `transformers>=4.51.0` is required for Qwen3 architecture support.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MultiverseComputingCAI/LittleLamb-ToolCalling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Build the prompt with the chat template; enable_thinking toggles
# Qwen3-style reasoning before the final answer.
messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)[0]

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[len(inputs.input_ids[0]) :], skip_special_tokens=True
)
print(response)
```

For OpenAI-compatible serving, use a stack that supports Qwen3 reasoning and tool calling (e.g. recent **vLLM** or **SGLang** with Qwen3 parsers); see the [Qwen3-0.6B model card](https://huggingface.co/Qwen/Qwen3-0.6B) for deployment examples.

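As a minimal client-side sketch, assuming such a server is already running locally (the endpoint URL, API key value, and port below are placeholders for your deployment):

```python
# Hypothetical client-side sketch: assumes an OpenAI-compatible server
# (e.g. vLLM or SGLang) is already serving this model at localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MultiverseComputingCAI/LittleLamb-ToolCalling",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.6,  # thinking-mode sampling per the Qwen3-0.6B card
    top_p=0.95,
    max_tokens=256,
)
print(response.choices[0].message.content)
```
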
---

## What's New in LittleLamb 0.3B Tool-Calling

### Summary

- **Tool-calling-optimized** variant of LittleLamb 0.3B, fine-tuned for function calling and structured outputs.
- **Ultra-compact** at 290M parameters, suitable for edge and on-device deployment with agentic capabilities.
- **Derived from [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)** via **CompactifAI** compression (~50% parameter reduction from the 0.6B base).

---

## Tool Calling

LittleLamb 0.3B Tool-Calling supports **native tool use** and is designed for:

- **Function calling** with defined schemas
- **Structured outputs**
- **Agentic operations** (e.g. browser tasks, code execution where supported)

The model can detect when to invoke tools, emit structured JSON tool calls, and consume tool outputs to continue generation. Tool-calling behavior follows Qwen3-style schemas, as in the sketch below.

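As a minimal sketch of declaring tools through the Transformers chat-template API (reusing `tokenizer` and `model` from the Quick Start; the `get_weather` schema is a made-up example):

```python
# Hypothetical tool schema for illustration; the model decides whether to call it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather forecast for a city on a given date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "date": {"type": "string", "description": "YYYY-MM-DD"},
                },
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Paris on 2026-02-10?"}]
text = tokenizer.apply_chat_template(
    messages,
    tools=tools,  # tool schemas are rendered into the prompt
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)[0]
response = tokenizer.decode(
    output_ids[len(inputs.input_ids[0]) :], skip_special_tokens=True
)
print(response)  # should contain a structured tool call like the example below
```
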
### Example Tool Call

```json
{
  "name": "get_weather",
  "arguments": {
    "city": "Paris",
    "date": "2026-02-10"
  }
}
```

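To close the loop, the emitted call can be parsed, executed, and fed back for a follow-up turn. The sketch below rests on assumptions worth checking against your setup: Qwen3-style templates typically wrap calls in `<tool_call>...</tool_call>` tags, the toy `get_weather` implementation is invented, and the exact message roles depend on the chat template.

```python
import json
import re

# Toy stand-in for a real tool implementation (a real one might hit an API).
def get_weather(city: str, date: str | None = None) -> dict:
    return {"city": city, "date": date, "forecast": "sunny", "temp_c": 18}

TOOL_REGISTRY = {"get_weather": get_weather}

# `response` is the decoded model output from the previous snippet.
match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", response, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    # Validate the tool name and arguments before executing in production.
    result = TOOL_REGISTRY[call["name"]](**call["arguments"])
    # Feed the result back so the model can continue generation; in a full
    # loop you would also append the assistant's tool-call message first.
    messages.append({"role": "tool", "content": json.dumps(result)})
```
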
---

## Dual-Mode Inference (Thinking / Non-Thinking)

LittleLamb 0.3B Tool-Calling inherits Qwen3's dual-mode capability, supporting seamless switching between **thinking mode** (for complex reasoning) and **non-thinking mode** (for efficient general-purpose dialogue).

In thinking mode, the model generates internal reasoning in Qwen3's thinking format (see the Qwen3 chat template) before producing the final response. Use this mode for tasks requiring multi-step reasoning, math, or code generation.

Set `enable_thinking=False` for lower-latency dialogue without explicit chain-of-thought in the template. Follow the **sampling parameters** recommended in the [Qwen3-0.6B model card](https://huggingface.co/Qwen/Qwen3-0.6B) for each mode, as in the sketch below.

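A minimal sketch of per-request mode switching, reusing `tokenizer` and `model` from the Quick Start together with the per-mode sampling settings listed under Evaluation Methodology below:

```python
def chat(messages, thinking: bool) -> str:
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=thinking,
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    sampling = (
        {"temperature": 0.6, "top_p": 0.95, "top_k": 20}  # thinking mode
        if thinking
        else {"temperature": 0.7, "top_p": 0.8, "top_k": 20}  # non-thinking mode
    )
    output_ids = model.generate(
        **inputs, max_new_tokens=512, do_sample=True, **sampling
    )[0]
    # In thinking mode the reasoning appears between <think> and </think>;
    # decode with skip_special_tokens=False if you need to inspect it.
    return tokenizer.decode(
        output_ids[len(inputs.input_ids[0]) :], skip_special_tokens=True
    )

print(chat([{"role": "user", "content": "What is 17 * 24?"}], thinking=True))
```
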
---

## Training & Fine-Tuning

### Base Model: Qwen3-0.6B

The base model, [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), is a causal language model from the Qwen3 family, supporting thinking and non-thinking modes. See the [Qwen3 technical report](https://arxiv.org/abs/2505.09388) for details.

### CompactifAI Compression & Tool-Calling Fine-Tuning

- **Compression:** CompactifAI was applied to produce a smaller, more efficient model (~0.3B parameters) while aiming to preserve reasoning capabilities.
- **Tool-calling fine-tuning:** This variant was additionally fine-tuned for function calling and structured outputs on top of the compressed LittleLamb base.

---

## Architecture

### Model Specifications

| Field            | Value                                                                   |
| ---------------- | ----------------------------------------------------------------------- |
| Base model       | [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) (0.6B params) |
| Total parameters | 290M (dense)                                                            |

---

## Evaluation & Benchmarks

### Evaluation Methodology

Benchmark scores were obtained with the following setups; the methodology varies by benchmark family.

For **LittleLamb 0.3B Tool-Calling** and **Qwen3-0.6B (base)**, benchmark runs are reported under both **thinking** and **non-thinking** chat modes, using the sampling settings recommended in the [Qwen3-0.6B model card](https://huggingface.co/Qwen/Qwen3-0.6B).

#### MMLU-Pro, GPQA Diamond, IFBench

- **Evaluation framework**: [NeMo-Skills](https://github.com/NVIDIA/NeMo-Skills)
- **Inference library**: vLLM 0.18.0
- **Thinking mode** (`enable_thinking=True`, per the Qwen3-0.6B recommendations): temperature = 0.6, top_p = 0.95, top_k = 20, min_p = 0
- **Non-thinking mode** (`enable_thinking=False`, per the Qwen3-0.6B recommendations): temperature = 0.7, top_p = 0.8, top_k = 20, min_p = 0

#### BFCL v4, τ²-Bench

- **Evaluation framework**: [EvalScope](https://github.com/EvalScope/EvalScope)
- **Inference library**: vLLM 0.18.0
- **Thinking mode** (`enable_thinking=True`, per the Qwen3-0.6B recommendations): temperature = 0.6, top_p = 0.95, top_k = 20, min_p = 0
- **Non-thinking mode** (`enable_thinking=False`, per the Qwen3-0.6B recommendations): temperature = 0.7, top_p = 0.8, top_k = 20, min_p = 0
- BFCL v4 results for `functiongemma-270m-it` were taken from [Google's model card](https://huggingface.co/google/functiongemma-270m-it) (09/04/2026)

### Quantitative Results

Reported numbers use the methodology described above.

#### Thinking mode

| Benchmark                   | functiongemma-270m-it | Qwen3-0.6B (think) | LittleLamb-TC 0.3B (think) |
| --------------------------- | --------------------- | ------------------ | -------------------------- |
| IFBench                     | 12.00                 | 23.88              | 20.00                      |
| GPQA Diamond                | 2.53                  | 29.59              | 27.47                      |
| MMLU-Pro                    | 0.42                  | 38.27              | 28.74                      |
| τ²-Bench                    | 5.05                  | 19.59              | 18.70                      |
| BFCL Simple                 | 61.60                 | 72.73              | 72.36                      |
| BFCL Multiple               | 63.50                 | 85.00              | 89.50                      |
| BFCL Parallel               | 39.00                 | 70.00              | 70.00                      |
| BFCL Parallel Multiple      | 29.50                 | 71.50              | 68.00                      |
| BFCL Live Simple            | 36.20                 | 63.18              | 64.34                      |
| BFCL Live Multiple          | 25.70                 | 56.41              | 60.78                      |
| BFCL Live Parallel          | 22.90                 | 50.00              | 62.50                      |
| BFCL Live Parallel Multiple | 20.80                 | 50.00              | 45.83                      |
| BFCL Relevance              | 61.10                 | 75.00              | 75.00                      |
| BFCL Irrelevance            | 73.70                 | 84.58              | 77.92                      |
| **BFCL v4**                 | 27.03                 | 54.08              | 51.55                      |

#### Non-thinking mode

| Benchmark                   | functiongemma-270m-it | Qwen3-0.6B (no think) | LittleLamb-TC 0.3B (no think) |
| --------------------------- | --------------------- | --------------------- | ----------------------------- |
| IFBench                     | 12.00                 | 23.80                 | 21.00                         |
| GPQA Diamond                | 2.53                  | 27.77                 | 27.37                         |
| MMLU-Pro                    | 0.42                  | 25.72                 | 23.71                         |
| τ²-Bench                    | 5.05                  | 15.50                 | 26.67                         |
| BFCL Simple                 | 61.60                 | 12.73                 | 70.55                         |
| BFCL Multiple               | 63.50                 | 20.00                 | 80.50                         |
| BFCL Parallel               | 39.00                 | 18.00                 | 71.50                         |
| BFCL Parallel Multiple      | 29.50                 | 30.50                 | 70.50                         |
| BFCL Live Simple            | 36.20                 | 4.65                  | 62.02                         |
| BFCL Live Multiple          | 25.70                 | 11.02                 | 50.43                         |
| BFCL Live Parallel          | 22.90                 | 0.00                  | 43.75                         |
| BFCL Live Parallel Multiple | 20.80                 | 12.50                 | 29.17                         |
| BFCL Relevance              | 61.10                 | 12.50                 | 75.00                         |
| BFCL Irrelevance            | 73.70                 | 97.50                 | 87.50                         |
| **BFCL v4**                 | 27.03                 | 29.17                 | 50.51                         |

|  |
| |
| |
| BFCL V4 is the de facto industry standard for evaluating function-calling (tool-use) capability. It tests whether models can correctly generate structured function calls in response to user queries, across simple single-call scenarios, parallel calls, multi-turn conversations, and complex agentic workflows. |
| |
| ### Quantitative Results (Inference Performance) |
| |
| #### Metrics reported |
| - **System Output Throughput (higher is better)**: Mean output tokens per second across all concurrent requests over the benchmarking phase. |
| - **End-to-End Latency per Query (lower is better):** Median end-to-end response time for each query from the time the query is sent. |
| - **Output Speed per Query (higher is better):** Median output tokens per second after the first token is received for each query. |
| - **Time to first token (TTFT) (lower is better):** Median |
| - **Estimated Peak Memory Usage (lower is better):** KV cache utilization is monitored during the phase and we estimate memory usage as follows: $model\_ weights_{gb} + kv\_ cache_{usage\_pct} × (nvml\_used_{gb} − model\_ weights_{gb})$ |
| - **Model weights (lower is better):** |
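As a toy illustration of the peak-memory estimate (a sketch only; the input numbers below are made up, not measured values):

```python
# Sketch of the peak-memory estimate above. model_weights_gb comes from the
# size of the loaded weights; kv_cache_usage (a 0-1 fraction) and nvml_used_gb
# are sampled during the benchmark phase via nvidia-ml-py.
def estimated_peak_memory_gb(
    model_weights_gb: float, kv_cache_usage: float, nvml_used_gb: float
) -> float:
    return model_weights_gb + kv_cache_usage * (nvml_used_gb - model_weights_gb)

# e.g. 0.6 GB of weights, KV cache 80% utilized, 5.0 GB reported used by NVML
print(estimated_peak_memory_gb(0.6, 0.80, 5.0))  # -> 4.12
```
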
| |
| |
| |
#### Performance evaluation conditions

Our performance evaluation follows the spirit of [Artificial Analysis](https://artificialanalysis.ai/methodology/system-load-test).

- **Inference library**: vLLM 0.18.0
- **Monitoring libraries**: GuideLLM 0.6.0, nvidia-ml-py 13.590.48
- **Hardware**: 1× NVIDIA L4 GPU
- **Conditions**: concurrency = 16
- **Phase duration**: Each phase lasts 3 minutes (excluding ramp-up and cool-down periods).
- **Workload shape**: 1,000 input tokens and 1,000 output tokens per query.
- **Streaming**: Benchmarking is conducted with streaming enabled.

**Summary of improvements:** LittleLamb shows a slight performance improvement over the original Qwen model. This is expected: for models this small, VRAM usage is dominated by the KV cache rather than by the model weights, so compressing the weights yields only modest end-to-end gains.

|  |
| |
| |
| |
| --- |
| |
| ## Languages |
| |
| - **Primary languages**: English. Spanish is yet to be tested for tool-calling capabilities. |
| |
| --- |
| |
## Intended Use

### Recommended Use Cases

Aligned with [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) use cases, with the added benefit of tool-calling capabilities in a smaller footprint suitable for edge and on-device deployment:

- **Function calling and agentic workflows** in resource-constrained environments
- **On-device and edge inference** where memory and compute are constrained
- **Structured output generation** (JSON, schemas)
- **Reasoning tasks** with configurable thinking/non-thinking modes
- **Chatbots and virtual assistants** with tool integration

### Out-of-Scope Uses

- Harmful, illegal, or deceptive content generation
- Impersonation of real individuals without consent
- High-risk decision-making without human oversight
- Surveillance or tracking of individuals
- Any use that violates applicable laws or regulations

---

## Safety & Limitations

### Known Limitations

- **Model scale:** At ~0.3B parameters, this is an ultra-compact model. Several frontier-scale benchmarks (GDPval-AA, Terminal-Bench Hard, AA-LCR, CritPt) produce no discriminative signal at this model size, as the base Qwen3-0.6B itself scores near zero on them.
- **Thinking mode:** Performance differs substantially between thinking and non-thinking modes across benchmarks. Users should evaluate both modes for their specific use case.
- **Tool calling:** While fine-tuned for tool use, the accuracy and reliability of tool calls should be validated for production use cases, given the model's compact size.

### Recommendations

- Use human oversight for critical applications
- Perform task-specific evaluation prior to deployment
- Test both thinking and non-thinking modes for your use case
- Validate tool-call outputs before executing them in production

---

## Model Information

| Field        | Value                                                                       |
| ------------ | --------------------------------------------------------------------------- |
| Model name   | LittleLamb 0.3B Tool-Calling                                                |
| Based on     | [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)                   |
| Version      | 2604                                                                        |
| Release date | 28/04/2026                                                                  |
| Developed by | Multiverse Computing                                                        |
| License      | Apache 2.0                                                                  |
| Contact      | [business@multiversecomputing.com](mailto:business@multiversecomputing.com) |

---

## Citation

If you use this model, please cite the base model and this variant:

```bibtex
@misc{qwen3technicalreport,
  title         = {Qwen3 Technical Report},
  author        = {Qwen Team},
  year          = {2025},
  eprint        = {2505.09388},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.09388}
}

@misc{littlelambtc,
  title  = {LittleLamb Tool-Calling: Compressed Qwen3-0.6B with Tool-Use via CompactifAI},
  author = {Multiverse Computing},
  year   = {2026},
  url    = {https://huggingface.co/MultiverseComputingCAI/LittleLamb-ToolCalling},
  note   = {Model derived from Qwen/Qwen3-0.6B using CompactifAI technology, fine-tuned for tool calling}
}
```

**Built by [Multiverse Computing](https://www.multiversecomputing.com)** · [Report an issue](https://huggingface.co/MultiverseComputingCAI/LittleLamb-ToolCalling/discussions) · [Discord](https://discord.gg/cGas9uStqp)