
Yacine Jernite

yjernite
huggingface

AI & ML interests

Technical, community, and regulatory tools of AI governance @HuggingFace

Recent Activity

liked a dataset 1 day ago
crownelius/Opus-4.6-Reasoning-3300x
liked a dataset 1 day ago
togethercomputer/CoderForge-Preview
reacted to SeaWolf-AI's post with 🚀 1 day ago
ALL Bench — Global AI Model Unified Leaderboard
https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard

If you've ever tried to compare GPT-5.2 and Claude Opus 4.6 side by side, you've probably hit the same wall: the official Hugging Face leaderboard only tracks open-source models, so the most widely used AI systems simply aren't there. ALL Bench fixes that by bringing closed-source models, open-weight models, and — uniquely — all four teams under South Korea's national sovereign AI program into a single leaderboard. Thirty-one frontier models, one consistent scoring scale.

Scoring works differently here too. Most leaderboards skip benchmarks a model hasn't submitted, which lets models game their ranking by withholding results. ALL Bench treats every missing entry as zero and divides by ten, so there's no advantage in hiding your weak spots. The ten core benchmarks span reasoning (GPQA Diamond, AIME 2025, HLE, ARC-AGI-2), coding (SWE-bench Verified, LiveCodeBench), and instruction-following (IFEval, BFCL).

The standout is FINAL Bench — the world's only benchmark measuring whether a model can catch and correct its own mistakes. It reached rank five in global dataset popularity on Hugging Face in February 2026 and has been covered by Seoul Shinmun, Asia Economy, IT Chosun, and Behind.

Nine interactive charts let you explore everything from composite score rankings and a full heatmap to an open-vs-closed scatter plot. Operational metrics like context window, output speed, and pricing are included alongside benchmark scores. All data is sourced from Artificial Analysis Intelligence Index v4.0, arXiv technical reports, Chatbot Arena ELO ratings, and the Korean Ministry of Science and ICT's official evaluation results. Updates monthly.
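The missing-as-zero rule the post describes can be sketched in a few lines. This is an illustration, not ALL Bench's actual implementation: the benchmark names come from the post (only some of the ten core benchmarks are listed there), and the `composite_score` function and its `results` dictionary are assumptions for the example.

```python
# Core benchmarks named in the post (the full leaderboard uses ten in total;
# this partial list is enough to illustrate the rule).
CORE_BENCHMARKS = [
    "GPQA Diamond", "AIME 2025", "HLE", "ARC-AGI-2",      # reasoning
    "SWE-bench Verified", "LiveCodeBench",                 # coding
    "IFEval", "BFCL",                                      # instruction-following
    "FINAL Bench",                                         # self-correction
]

def composite_score(results: dict) -> float:
    """Composite score under the missing-as-zero rule.

    Any benchmark absent from `results` contributes 0, and the divisor
    is fixed at 10 (the number of core benchmarks), so a model cannot
    raise its ranking by withholding weak results.
    """
    return sum(results.get(name, 0.0) for name in CORE_BENCHMARKS) / 10
```

Because the divisor is fixed at ten rather than the number of submitted scores, omitting a benchmark can only lower a model's composite score, never raise it.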

Organizations

Hugging Face, Society & Ethics, BigScience Workshop, GEM benchmark, BigScience Catalogue Data, BigScience Data, HF Task Exploration, HuggingFaceM4, BigCode, Stable Bias, Hugging Face H4, 🤗 H4 Community, Hugging Face OSS Metrics, BigCode Data, Stable Diffusion Bias Eval, Librarian Bots, Blog-explorers, EvalEval Coalition, llm-values, Bias Leaderboard Development, AI Energy Score, Journalists on Hugging Face, Social Post Explorers, Frugal AI Challenge, Open R1, Open Agents, Hugging Face ML & Society Team, AI companionship, Mighty Morphin Power Rangers, ROOST (Robust Open Online Safety Tools), Economies, Toad HF Inference Explorers, Unsloth Jobs Explorers