Massimo Roberto Scamarcia (PRO)
mrs83
62 followers · 78 following
mrs83
massimoscamarcia
ethicalabs.bsky.social
AI & ML interests
Natural Language Processing, Text Generation, Question Answering, Data Augmentation, Knowledge Transfer, Chain-of-Thought, ResearchOps, MLOps
Recent Activity
updated a model 2 days ago: ethicalabs/Echo-DSRN-114M-v0.1.2-Base
updated a model 2 days ago: ethicalabs/Echo-DSRN-114M-v0.1.2
reacted to qgallouedec's post 3 days ago
TRL v1.3 ships day-one training support for Qwen 3.6. The new Qwen 3.6 family (`Qwen/Qwen3.6-27B`, `Qwen/Qwen3.6-35B-A3B`) reuses the Qwen3.5-MoE architecture but ships a slightly different chat template, so we updated the stack end-to-end: a new training template with `{% generation %}` markers, tool-call response schema routing, and tiny test models for the VLM matrix.

SFT with assistant-only loss works out of the box:

```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model="Qwen/Qwen3.6-27B",
    args=SFTConfig(assistant_only_loss=True),
    train_dataset=dataset,
)
trainer.train()
```

So does GRPO tool-calling: just hand `tools=[...]` to `GRPOTrainer`.

v1.3 also brings a new experimental TPO trainer (Triple Preference Optimization), speculative decoding in `trl vllm-serve` (Qwen3 MTP / Eagle3 drafts), 12 more KTO–DPO alignment PRs (KTO promotion to stable is now in reach), three more `{% generation %}` chat templates (Gemma/Gemma 2, Phi-3, GLM-4-MoE), and a chunky SFT entropy bug fix.

Full release notes: https://github.com/huggingface/trl/releases/tag/v1.3.0
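The `assistant_only_loss=True` option relies on the chat template's `{% generation %}` markers to identify which tokens belong to assistant turns, so that only those tokens contribute to the loss. A minimal sketch of that masking idea (the helper name and span format here are illustrative assumptions, not TRL's internal API):

```python
# Sketch of assistant-only loss masking (hypothetical helper, not TRL's
# implementation). Tokens outside assistant spans get label -100, which
# PyTorch-style cross-entropy losses ignore by convention.
IGNORE_INDEX = -100

def mask_non_assistant(token_ids, assistant_spans):
    """Return labels keeping only tokens inside assistant spans.

    assistant_spans: list of (start, end) half-open index ranges, i.e. the
    regions a chat template's {% generation %} markers would delimit.
    """
    labels = [IGNORE_INDEX] * len(token_ids)
    for start, end in assistant_spans:
        for i in range(start, min(end, len(token_ids))):
            labels[i] = token_ids[i]
    return labels

# Example: a 10-token sequence where tokens 4..7 form the assistant reply.
tokens = list(range(100, 110))
labels = mask_non_assistant(tokens, [(4, 8)])
# labels -> [-100, -100, -100, -100, 104, 105, 106, 107, -100, -100]
```

With no assistant spans, every label is ignored, so a prompt-only sequence contributes nothing to the loss.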
Organizations
mrs83's buckets (5)
mrs83/huggingface-static-62f6b6-bucket (0 Bytes)
mrs83/huggingface-static-0eb09e-bucket (0 Bytes)
mrs83/huggingface-static-baf14e-bucket (0 Bytes)
mrs83/huggingface-static-4d2f8c-bucket (65.1 kB)
mrs83/echo-pizza-sft-bucket (0 Bytes)