Instructions to use AxionLab-Co/NanoThink-5M with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use AxionLab-Co/NanoThink-5M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="AxionLab-Co/NanoThink-5M")

# Load the model directly
from transformers import NanoThink

model = NanoThink.from_pretrained("AxionLab-Co/NanoThink-5M", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use AxionLab-Co/NanoThink-5M with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "AxionLab-Co/NanoThink-5M"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AxionLab-Co/NanoThink-5M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/AxionLab-Co/NanoThink-5M
```
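The curl request above can also be issued from Python. The sketch below uses only the standard library and assumes an OpenAI-compatible vLLM server is already running on localhost:8000; the `complete` helper and variable names are illustrative, not part of any library:

```python
import json
import urllib.request

# OpenAI-compatible completion request; values mirror the curl example above
payload = {
    "model": "AxionLab-Co/NanoThink-5M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(payload, url="http://localhost:8000/v1/completions"):
    """POST the payload to the server and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With the server running, the generated text would be at:
#   complete(payload)["choices"][0]["text"]
```

The same client works for the SGLang server below by changing the URL's port to 30000.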
- SGLang
How to use AxionLab-Co/NanoThink-5M with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "AxionLab-Co/NanoThink-5M" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AxionLab-Co/NanoThink-5M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "AxionLab-Co/NanoThink-5M" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AxionLab-Co/NanoThink-5M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use AxionLab-Co/NanoThink-5M with Docker Model Runner:
```shell
docker model run hf.co/AxionLab-Co/NanoThink-5M
```
Emergent Properties
"Intelligence can be mimicked at small scale — but not yet achieved."
The way I see it, emergent behavior becomes more pronounced as parameter count grows: 1B -> 7B -> 14B -> 30B -> 70B -> 100B.
Maybe the way to solve small-scale intelligence is to reduce the problem set and allow more room for emergence: aim not for human intelligence, but for something closer to an actual lab mouse.
I like that perspective, especially the idea of reducing the problem space.
But I’d push back a bit on my own earlier statement: I don’t think small-scale intelligence is just mimicry anymore.
It seems more accurate to say it’s constrained rather than absent.
While emergent behavior becomes more pronounced with scale, approaches like reinforcement learning and self-play suggest that meaningful behavior can still emerge in smaller systems given the right structure and environment.
So instead of “not yet achieved”, I’d frame it as “achieved within limits”.
I was having a chat with my local LLM and think this is relevant here.
"The pattern is structural parallelism: both mathematics and physics use axiomatic or theoretical frameworks where solutions (proofs or unified theories) emerge from constraints, and the absence of a current solution does not imply impossibility—it only indicates incomplete knowledge or unresolved constraints."
Another way to look at it, metaphorically: the problem space is a large house that never changes, but the mouse can go anywhere in that house, even through the walls.