Roberto Tacconelli

robtacconelli

AI & ML interests

None yet

Recent Activity

updated a dataset about 8 hours ago
robtacconelli/smollm2-135M-GGUF
reacted to their post with 🔥 about 10 hours ago
๐Ÿ† Nacrith: a 135M model that out-compresses everything on natural language What if a tiny LM could compress english text better than _every_ compressor out there โ€” classical or neural, small or large? Nacrith pairs SmolLM2-135M with an ensemble of online predictors and high-precision arithmetic coding. What's inside The standard LLM+arithmetic coding approach wastes ~75% of CDF precision on large vocabularies. Our CDF-24 fix alone recovers 0.5 bpb. On top: a token N-gram that skips the GPU on predictable tokens, an adaptive bias head, llama.cpp backend (7ร— faster than PyTorch), multi-GPU parallel compression, and a binary file format (NC06) โ€” the first LLM-based binary compressor we know of. Runs on a GTX 1050 Ti. ~500 MB weights, ~1.2 GB VRAM per worker. ๐Ÿ’ป Code: https://github.com/robtacconelli/Nacrith-GPU โญ Space: https://huggingface.co/spaces/robtacconelli/Nacrith-GPU ๐Ÿ“„ Paper: https://huggingface.co/papers/2602.19626 Try it, break it, share your results โ€” all feedback welcome. โญ on the repo appreciated! Results across all systems we tested: - alice29.txt โ†’ 0.918 bpb (โˆ’44% vs CMIX, โˆ’20% vs ts_zip) โ€” below the 2nd-order Shannon entropy bound - enwik8 (100 MB) โ†’ 0.9389 bpb (โˆ’8% vs FineZip/LLMZip's 8B model, โˆ’15% vs ts_zip) - Unseen text โ†’ 0.723 bpb on a doc published after training cutoff โ€” no memorization, 26% better than FineZip/LLMZip on the same model SmolLM2-135M by https://huggingface.co/HuggingFaceTB

Organizations

None yet