Post 51
In celebration of the new storage graph feature on the Hub, here's mine 😊 Post inspired by @ZennyKenny
Post 110
I recently created my first storage bucket to store experiment data from my performance analysis of 15 tokenizers across 20 languages. The setup is simple enough for a new product and can scale depending on the use case 🤗

Bucket: https://huggingface.co/buckets/AINovice2005/tokenizer-benchmark
GitHub gist: https://gist.github.com/ParagEkbote/b3877f667f84cbb9a27bdaca94ba662a
Article: https://medium.com/@paragekbote23/one-sentence-fifteen-tokenizers-a-tokenizer-benchmarking-pipeline-with-hf-storage-buckets-2e59790276fd
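The core of a tokenizer benchmark like the one above can be sketched as a small harness that times each tokenizer over a set of sentences and records token counts. This is a minimal stdlib-only sketch, not the pipeline from the gist: the two toy tokenizers here (whitespace and character splitting) are hypothetical stand-ins for the 15 real Hugging Face tokenizers, which any callable mapping a string to a token list could replace.

```python
import time

# Hypothetical stand-in tokenizers; the real pipeline benchmarks 15 HF
# tokenizers, but any callable str -> list[str] fits this harness.
def whitespace_tokenizer(text):
    return text.split()

def char_tokenizer(text):
    return list(text)

def benchmark(tokenizers, sentences):
    """Return per-tokenizer token totals, averages, and wall-clock time."""
    results = {}
    for name, tok in tokenizers.items():
        start = time.perf_counter()
        token_counts = [len(tok(s)) for s in sentences]
        elapsed = time.perf_counter() - start
        results[name] = {
            "total_tokens": sum(token_counts),
            "avg_tokens": sum(token_counts) / len(sentences),
            "seconds": elapsed,
        }
    return results

sentences = ["The quick brown fox jumps over the lazy dog."]
report = benchmark(
    {"whitespace": whitespace_tokenizer, "char": char_tokenizer}, sentences
)
for name, stats in report.items():
    print(name, stats["total_tokens"])
```

Swapping the toy callables for `AutoTokenizer.from_pretrained(...).tokenize` and looping over sentences in 20 languages would recover the shape of the benchmark described in the post.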
CICFlow-Meter ModernBERT LoRA
A collection of ModernBERT-based binary classifiers fine-tuned with LoRA adapters at ranks 4, 8, and 16 for efficient network flow analysis.
- AINovice2005/ModernBERT-base-lora-cicflow-1m-r16 (Fill-Mask • Updated 2 days ago • 62)
- AINovice2005/ModernBERT-base-lora-cicflow-1m-r8 (Fill-Mask • Updated 2 days ago • 59)
- AINovice2005/ModernBERT-base-lora-cicflow-1m-r4 (Fill-Mask • Updated 2 days ago • 22)
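The rank knob (4, 8, 16) in this collection is the "r" of the LoRA decomposition. The following is a hedged, stdlib-only illustration of that idea, not the actual training code for these models: instead of updating a full weight matrix W, LoRA learns two low-rank factors B and A and applies W_eff = W + (alpha / r) * (B @ A), so r caps the number of trainable parameters. All matrix sizes below are illustrative.

```python
def matmul(X, Y):
    # Plain triple-loop matrix multiply for small illustrative matrices.
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    """W_eff = W + (alpha / r) * (B @ A), with rank r = rows of A."""
    r = len(A)
    delta = matmul(B, A)
    return [[W[i][j] + (alpha / r) * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

def lora_param_count(d_out, d_in, r):
    # Trainable parameters: B contributes d_out*r, A contributes r*d_in.
    return d_out * r + r * d_in

# Example: a 768x768 projection with rank-8 adapters trains 12,288
# parameters instead of 589,824 for full fine-tuning.
print(lora_param_count(768, 768, 8))
print(768 * 768)
```

Doubling the rank (r8 to r16) doubles the adapter parameter count, which is why the three checkpoints trade a small capacity difference against adapter size.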
SmolLM-Smashed: Tiny Giants, Optimized for Speed
SmolLM-Smashed is a collection of optimized language models. Each model is quantized and compiled for maximum efficiency while preserving performance.
- AINovice2005/SmolLM3-3B-smashed (Text Generation • Updated Oct 4, 2025 • 5)
- AINovice2005/SmolLM2-1.7B-smashed (Text Generation • Updated Oct 4, 2025 • 7)
- AINovice2005/SmolLM2-360M-smashed (Text Generation • Updated Oct 4, 2025 • 5)
- AINovice2005/SmolLM-360M-smashed (Text Generation • Updated Oct 4, 2025 • 5)
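To make "quantized" concrete, here is a minimal sketch of symmetric int8 weight quantization, one common technique behind compressed checkpoints like these. This is an assumption about the general approach, not the actual pipeline or tooling used for this collection: each weight is stored as an 8-bit integer plus one shared float scale, roughly quartering memory versus float32.

```python
def quantize_int8(weights):
    """Map floats to int8 [-127, 127] using a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid scale of 0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    # Reconstruct approximate floats; round-trip error is at most scale/2.
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25, 0.0]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
print(q)
```

Real pipelines apply this per tensor (or per channel) and pair it with graph compilation, but the storage/accuracy trade-off is the same: one int8 per weight plus a handful of scales.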