Tags: Text Generation · Transformers · Safetensors · English · Chinese · neuronspark · spiking-neural-network · snn · ponder-net · sft · chat · thinking · custom-architecture · conversational · custom_code
Instructions to use Brain2nd/NeuronSpark-V3-1.1B-SFT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Brain2nd/NeuronSpark-V3-1.1B-SFT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Brain2nd/NeuronSpark-V3-1.1B-SFT", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Brain2nd/NeuronSpark-V3-1.1B-SFT", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Brain2nd/NeuronSpark-V3-1.1B-SFT with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Brain2nd/NeuronSpark-V3-1.1B-SFT"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Brain2nd/NeuronSpark-V3-1.1B-SFT",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/Brain2nd/NeuronSpark-V3-1.1B-SFT
```
- SGLang
How to use Brain2nd/NeuronSpark-V3-1.1B-SFT with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Brain2nd/NeuronSpark-V3-1.1B-SFT" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Brain2nd/NeuronSpark-V3-1.1B-SFT",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Brain2nd/NeuronSpark-V3-1.1B-SFT" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Brain2nd/NeuronSpark-V3-1.1B-SFT",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use Brain2nd/NeuronSpark-V3-1.1B-SFT with Docker Model Runner:
```shell
docker model run hf.co/Brain2nd/NeuronSpark-V3-1.1B-SFT
```
# NeuronSpark V3 — 1.1B SFT (step 2000)
Architecture: SNN (Spiking Neural Network) decoder with PonderNet-style adaptive K time-step routing. Custom model, not a transformer variant.
- Hidden dim D = 1024 · K_max = 12 · 24 layers
- ~1.24B parameters (model.safetensors ≈ 2.47 GB bf16)
- Tokenizer vocab = 128387 (multilingual)
- SFT step 2000, base = pretrain step 108000
- Optimizer: AdamW + DeepSpeed ZeRO-2 (8 ranks)
- SFT data: uniform-bucketed [1000, 2048] tokens, ~35% thinking / 65% non-thinking, multi-turn ZH/EN balanced (OpenThoughts, QwQ, Congliu, smoltalk2, WildChat-1M, no_robots)
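The PonderNet-style routing above means each token can spend a variable number of SNN time steps, up to K_max = 12: at every step the model emits a conditional halting probability, and the step count is drawn from the induced geometric-like distribution. A minimal sketch of that sampling rule (the function name and probabilities are hypothetical, not this model's actual API):

```python
import random

K_MAX = 12  # K_max from the spec above


def ponder_steps(halt_probs, rng=random):
    """Sample the number of SNN time steps K for one token.

    halt_probs[n] is the conditional probability of halting after step
    n+1 (a PonderNet-style halting head). If no step halts, halting is
    forced at K_MAX. This is an illustrative sketch, not the model code.
    """
    for n, lam in enumerate(halt_probs[:K_MAX], start=1):
        if rng.random() < lam:
            return n
    return min(len(halt_probs), K_MAX)


# An "easy" token halts immediately; a "hard" one runs all K_MAX steps.
easy = ponder_steps([1.0] * K_MAX)  # -> 1
hard = ponder_steps([0.0] * K_MAX)  # -> 12
```

At inference this is what makes compute per token adaptive: cheap tokens exit after one or two steps, hard ones use the full budget.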
## Load (inference)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Brain2nd/NeuronSpark-V3-1.1B-SFT")
model = AutoModelForCausalLM.from_pretrained(
    "Brain2nd/NeuronSpark-V3-1.1B-SFT",
    trust_remote_code=True,
).cuda().eval()

msgs = [{"role": "user", "content": "What is the capital of France?"}]
text = tokenizer.apply_chat_template(
    msgs, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
ids = tokenizer(text, return_tensors="pt").input_ids.cuda()
out = model.generate_cached(
    input_ids=ids,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.9,
    top_k=50,
    repetition_penalty=1.1,
)
print(tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=False))
```
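With enable_thinking=True the decoded output interleaves a reasoning segment with the final answer. Assuming the chat template wraps reasoning in `<think>...</think>` markers (an assumption — check the tokenizer's chat template for the actual special tokens), the two parts can be separated like this:

```python
def split_thinking(text):
    """Split decoded output into (reasoning, answer).

    Assumes hypothetical <think>...</think> markers; adjust to the
    real special tokens defined in this model's tokenizer config.
    """
    if "</think>" in text:
        thinking, answer = text.split("</think>", 1)
        return thinking.replace("<think>", "").strip(), answer.strip()
    return "", text.strip()


reasoning, answer = split_thinking(
    "<think>Paris is France's capital.</think>The capital of France is Paris."
)
```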
## Resume SFT

The deepspeed/ directory contains the 8-rank ZeRO-2 sharded optimizer state. To resume training:

```shell
deepspeed --num_gpus=8 train_sft.py \
  --deepspeed_config ds_config.json \
  --resume ./ckpt_step2000
```
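The referenced ds_config.json is not reproduced on this card. A minimal ZeRO stage-2 configuration consistent with the setup described above (AdamW, bf16) might look like the following sketch; the learning rate and batch sizes are placeholders, not the values used in training:

```python
import json

# Hypothetical minimal DeepSpeed ZeRO-2 config; only the structure
# mirrors the card's setup, the hyperparameter values are placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
    "bf16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
    "zero_optimization": {"stage": 2},
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```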
## Files

| File | Description |
|---|---|
| model.safetensors | HF-format bf16 weights |
| config.json / generation_config.json | model config |
| tokenizer.json / tokenizer_config.json | tokenizer (vocab=128387) |
| modeling_neuronspark.py / configuration_neuronspark.py | custom arch (trust_remote_code=True) |
| deepspeed/ | ZeRO-2 optimizer state (8 ranks) |
| training_state.pth | step / epoch / tokens_seen |
| zero_to_fp32.py | DeepSpeed helper |