Qwen3.6 (Qwen/Qwen3.6-27B, Qwen/Qwen3.6-35B-A3B) reuses the Qwen3.5-MoE architecture but ships a slightly different chat template, so we updated the stack end-to-end: a new training template with {% generation %} markers, tool-call response schema routing, and tiny test models for the VLM matrix.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any conversational dataset works; trl-lib/Capybara is just an example.
dataset = load_dataset("trl-lib/Capybara", split="train")

# assistant_only_loss masks the loss on every token outside the
# {% generation %} ... {% endgeneration %} spans of the chat template.
trainer = SFTTrainer(
    model="Qwen/Qwen3.6-27B",
    args=SFTConfig(assistant_only_loss=True),
    train_dataset=dataset,
)
trainer.train()
```
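For context, the {% generation %} markers delimit the assistant spans inside the chat template so that an assistant-token mask can be derived at tokenization time. Here is a minimal, hypothetical Jinja sketch (not the actual Qwen3.6 template):

```python
# Hypothetical chat template (NOT the real Qwen3.6 one): the
# {% generation %} ... {% endgeneration %} block marks assistant tokens so
# assistant_only_loss can ignore everything else.
CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "{% if message['role'] == 'assistant' %}"
    "<|im_start|>assistant\n"
    "{% generation %}{{ message['content'] }}<|im_end|>{% endgeneration %}\n"
    "{% else %}"
    "<|im_start|>{{ message['role'] }}\n{{ message['content'] }}<|im_end|>\n"
    "{% endif %}"
    "{% endfor %}"
)
```

Tokenizers expose the resulting mask via `apply_chat_template(..., return_assistant_tokens_mask=True, return_dict=True)`.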
You can now pass tools=[...] to GRPOTrainer. Elsewhere in the release: draft-model support in trl vllm-serve (Qwen3 MTP / Eagle3 drafts), 12 more KTO → DPO alignment PRs (KTO promotion to stable is now within reach), three more {% generation %} chat templates (Gemma/Gemma 2, Phi-3, GLM-4-MoE), and a chunky SFT entropy bug fix.
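A minimal sketch of the new tools argument, assuming it accepts plain Python callables whose signatures and docstrings are turned into tool-call schemas; the reward function and dataset here are illustrative placeholders, not part of the release note:

```python
from datasets import load_dataset
from trl import GRPOTrainer

# Hypothetical tool: we assume TRL derives the tool-call schema from the
# signature and docstring (not confirmed by the release note).
def get_weather(city: str) -> str:
    """Return the current weather for `city`."""
    return f"Sunny in {city}"

# Toy reward for illustration: favor completions that invoke the tool.
def tool_use_reward(completions, **kwargs):
    return [1.0 if "get_weather" in str(c) else 0.0 for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

trainer = GRPOTrainer(
    model="Qwen/Qwen3.6-27B",
    reward_funcs=tool_use_reward,
    train_dataset=dataset,
    tools=[get_weather],  # new in this release
)
trainer.train()
```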
The release also ships an experimental SSD trainer under trl.experimental:

```python
from datasets import load_dataset
from trl.experimental.ssd import SSDConfig, SSDTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # example dataset

# Sampling settings for generation (these match Qwen3's recommended defaults).
trainer = SSDTrainer(
    model="Qwen/Qwen3-4B-Instruct",
    args=SSDConfig(temperature=0.6, top_k=20, top_p=0.95),
    train_dataset=dataset,
)
trainer.train()
```
Also shipped: use_transformers_paged, and key fixes for VLM response parsing. Upgrade to try everything out:

```bash
pip install --upgrade trl
```
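Once upgraded, the paged generation path mentioned above is toggled from the config; a sketch, assuming the flag lives on GRPOConfig as in current TRL:

```python
from trl import GRPOConfig

# use_transformers_paged switches rollout generation to transformers'
# paged implementation instead of the default generation path.
config = GRPOConfig(use_transformers_paged=True)
```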