Our lab recently released a paper where we introduce ShadowPEFT, a new Parameter-Efficient Fine-Tuning (PEFT) paradigm tailored for edge computing scenarios.
Unlike traditional approaches such as LoRA and its variants, which inject trainable parameters directly into the Transformer's weights and therefore require tight coupling with the backbone, ShadowPEFT enhances the frozen base model by adding a lightweight, centralized, pretrainable, and detachable Shadow network. This Shadow network operates in parallel with the base model and delivers learned corrections to each decoder layer. Because the Shadow module is architecturally decoupled from the backbone, it can be independently trained, stored, and deployed, which benefits edge computing and edge-cloud collaborative computing.
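To make the parallel-correction idea concrete, here is a minimal sketch of a frozen decoder stack with a detachable shadow module; the class and argument names, shapes, and bottleneck design are illustrative assumptions, not the paper's actual code.

```python
# Illustrative sketch only: names and the bottleneck design are assumptions,
# not ShadowPEFT's published implementation.
import torch
import torch.nn as nn

class ShadowNetwork(nn.Module):
    """A small trainable network that runs alongside a frozen backbone and
    emits one additive correction per decoder layer."""
    def __init__(self, hidden_size: int, num_layers: int, bottleneck: int = 64):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, bottleneck),
                nn.GELU(),
                nn.Linear(bottleneck, hidden_size),
            )
            for _ in range(num_layers)
        ])

    def forward(self, hidden: torch.Tensor, layer_idx: int) -> torch.Tensor:
        return self.blocks[layer_idx](hidden)

def forward_with_shadow(frozen_layers, shadow: ShadowNetwork, hidden):
    # The backbone stays frozen; only the shadow module receives gradients,
    # so it can be trained, stored, and shipped separately from the base model.
    for i, layer in enumerate(frozen_layers):
        with torch.no_grad():
            hidden = layer(hidden)
        hidden = hidden + shadow(hidden, layer_idx=i)
    return hidden
```

Because all trainable state lives in `ShadowNetwork`, detaching it at inference time simply means skipping the correction term.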
SKT-SURYA-H (2.544T) is officially out! Heterogeneous MoE, 131K context, 3.76TB of weights (898 shards). Massive respect to the team for keeping it open for the community!
We are thrilled to announce the launch of SKT-OMNI-CORPUS-146T-V1, a massive-scale, high-quality dataset designed to power the next generation of Foundation Models (LLMs) from scratch. Developed at SKT AI LABS, this corpus is not just a collection of data; it's a mission to decentralize high-grade AI training for regional languages and global knowledge.
Key Highlights:
• Massive Scale: targeting a multi-terabyte corpus at the 146T-token level.
• Pure Quality: curated from 500+ elite sources.
• Structured for MoE: sharded into standardized 3.5 GB units (SKT series) for seamless distributed training (see the loading sketch after this list).
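As a hedged sketch, a sharded corpus like this could be streamed with the `datasets` library; the repo id below is a placeholder guess, not a confirmed path.

```python
# Assumption: the corpus is published as a Hub dataset; the repo id is a
# placeholder and may not match the real one.
from datasets import load_dataset

corpus = load_dataset(
    "SKT-AI-LABS/SKT-OMNI-CORPUS-146T-V1",  # hypothetical repo id
    split="train",
    streaming=True,  # iterate shard by shard instead of downloading everything
)

for i, example in enumerate(corpus):
    print(example)
    if i >= 2:
        break
```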
Open for Collaboration!
We are looking for AI researchers, CUDA engineers, and data scientists to join us in this journey of building Project Surya and the ST-X Series models. Whether it's optimization, custom tokenization, or architecture design, let's build the future together.
Hundreds of AI leaderboards exist on HuggingFace. Knowing which ones the community actually trusts has never been easy, until now.
Leaderboard of Leaderboards (LoL) ranks the leaderboards themselves, using live HuggingFace trending scores and cumulative likes as the signal. No editorial curation. No manual selection. Just what the global AI research community is actually visiting and endorsing, surfaced in real time.
Sort by trending to see what is capturing attention right now, or by likes to see what has built lasting credibility over time. Nine domain filters let you zero in on what matters most to your work, and every entry shows both its rank within this collection and its real-time global rank across all HuggingFace Spaces.
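As a rough illustration of the signal described above, here is how one might pull likes for leaderboard Spaces with `huggingface_hub`; this is not the LoL implementation, just a sketch of the same idea.

```python
# Sketch: rank leaderboard-style Spaces by cumulative likes.
# (Illustration only; LoL's actual ranking also uses live trending scores.)
from huggingface_hub import HfApi

api = HfApi()
spaces = api.list_spaces(search="leaderboard", sort="likes", direction=-1, limit=20)

for rank, space in enumerate(spaces, start=1):
    print(f"{rank:2d}. {space.id}  likes={space.likes}")
```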
The collection spans well-established standards like Open LLM Leaderboard, Chatbot Arena, MTEB, and BigCodeBench alongside frameworks worth watching. FINAL Bench targets AGI-level evaluation across 100 tasks in 15 domains and recently reached the global top 5 in HuggingFace dataset rankings. Smol AI WorldCup runs tournament-format competitions for sub-8B models scored via FINAL Bench criteria. ALL Bench aggregates results across frameworks into a unified ranking that resists the overfitting risks of any single standard.
The deeper purpose is not convenience. It is transparency. How we measure AI matters as much as the AI we measure.
Today we're publicly releasing Kanon 2 Enricher, and with it, an entirely new class of AI model that we're calling a hierarchical graphitization model. This is fundamentally different from both universal extraction models and generative models.
As a hierarchical graphitization model, Kanon 2 Enricher natively outputs a **knowledge graph** rather than tokens, which makes it architecturally incapable of hallucinating or inventing text that wasn't present in the input.
What that enables in practice is unlike any other model or ML architecture on the market:
• **No hallucinations.** It cannot hallucinate. All references and links are stored as spans, meaning exact character offsets anchored to the original text (see the sketch after this list).
• **Hierarchical segmentation, not just extraction.** It deconstructs a document's full nested hierarchy, down to chapters, sections, clauses, schedules, signatures, and even individual sentences, and classifies each span with dozens of contextual features.
• **Entity extraction, disambiguation, and linking.** It resolves what references actually point to, then links entities, citations, and cross-references into a single coherent graph.
• **Graph-first efficiency.** Small enough to run locally on a consumer PC with sub-second latency, and it stays reliable on long documents where front
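To ground the span idea in the first bullet, here is an illustrative sketch of span-anchored graph nodes; the field names are hypothetical, not Kanon 2 Enricher's actual output schema.

```python
# Illustration only: hypothetical field names, not the model's real schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Span:
    start: int   # exact character offset into the original document
    end: int
    label: str   # e.g. "clause", "citation", "signature"

@dataclass
class Node:
    span: Span
    parent: Optional[int] = None                    # enclosing node (hierarchy)
    links: List[int] = field(default_factory=list)  # resolved cross-references

def span_text(doc: str, span: Span) -> str:
    # Reading a node back is just slicing the source text: nothing is generated,
    # so every output string must literally exist at these offsets.
    assert 0 <= span.start <= span.end <= len(doc)
    return doc[span.start:span.end]
```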
Dear Hugging Face team, can we please have a way to archive HF repositories / Spaces? I have a bunch of Spaces that used to work but don't anymore because the HF Space implementations changed, and I think it would be good if I could archive those, like on GitHub.
React to this post if you want to see this feature!