All HF Hub posts

SeaWolf-AI posted an update 2 days ago
🏟️ Smol AI WorldCup: A 4B Model Just Beat 8B — Here's the Data

We evaluated 18 small language models from 12 makers on 125 questions across 7 languages. The results challenge the assumption that bigger is always better.

Community Article: https://huggingface.co/blog/FINAL-Bench/smol-worldcup
Live Leaderboard: ginigen-ai/smol-worldcup
Dataset: ginigen-ai/smol-worldcup

What we found:

→ Gemma-3n-E4B (4B, 2GB RAM) outscores Qwen3-8B (8B, 5.5GB). Doubling parameters gained only 0.4 points. RAM cost: 2.75x more.

→ GPT-OSS-20B fits in 1.5GB yet matches Champions-league dense models requiring 8.5GB. MoE architecture is the edge AI game-changer.

→ Thinking models hurt structured output. DeepSeek-R1-7B scores 8.7 points below same-size Qwen3-8B and runs 2.7x slower.

→ A 1.3B model fabricates confident fake content 80% of the time when prompted with nonexistent entities. The Qwen3 family hits 100% trap detection across all sizes.

→ Qwen3-1.7B (1.2GB) outscores Mistral-7B, Llama-3.1-8B, and DeepSeek-R1-14B. The latest architecture at 1.7B beats older architecture at 14B.

What makes this benchmark different?

Most benchmarks ask "how smart?" — we measure five axes simultaneously: Size, Honesty, Intelligence, Fast, Thrift (SHIFT). Our ranking metric WCS = sqrt(SHIFT × PIR_norm) rewards models that are both high-quality AND efficient. Smart but massive? Low rank. Tiny but poor? Also low.
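The WCS formula is simple enough to sketch in code. This is our illustration of the stated formula; the function name and example scores are made up, not leaderboard data:

```python
import math

def wcs(shift: float, pir_norm: float) -> float:
    """World Cup Score: geometric mean of the SHIFT quality score and the
    normalized PIR, so a model must do well on BOTH axes to rank highly."""
    return math.sqrt(shift * pir_norm)

# A lopsided model (great quality, poor efficiency or vice versa) ranks
# below a balanced one. Values here are illustrative only.
balanced = wcs(80.0, 80.0)   # geometric mean of equal scores is that score
lopsided = wcs(95.0, 40.0)   # pulled down hard by the weaker axis
```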

Top 5 by WCS:
1. GPT-OSS-20B — WCS 82.6 — 1.5GB — Raspberry Pi tier
2. Gemma-3n-E4B — WCS 81.8 — 2.0GB — Smartphone tier
3. Llama-4-Scout — WCS 79.3 — 240 tok/s — Fastest model
4. Qwen3-4B — WCS 76.6 — 2.8GB — Smartphone tier
5. Qwen3-1.7B — WCS 76.1 — 1.2GB — IoT tier

Built in collaboration with the FINAL Bench research team. Interoperable with ALL Bench Leaderboard for full small-to-large model comparison.

Dataset is open under Apache 2.0 (125 questions, 7 languages). We welcome new model submissions.
SeaWolf-AI posted an update 3 days ago
🚀 Introducing MARL — Runtime Middleware That Reduces LLM Hallucination Without Fine-Tuning

Now available on PyPI · GitHub · ClawHub · HuggingFace
AI models sense they could be wrong, but they can't actually fix what's broken.

🤗 Live A/B test: VIDraft/MARL

We evaluated 9 SOTA models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, etc.) across 1,800 assessments in FINAL Bench and found a 39.2 percentage-point gap between "recognizing potential errors (MA=0.694)" and "actually finding and fixing them (ER=0.302)."

MARL (Model-Agnostic Runtime Middleware for LLMs) was built to close this metacognitive gap. It decomposes a single LLM call into a 5-stage expert pipeline (Hypothesis → Solver → Auditor → Adversarial Verifier → Synthesizer), transforming "answer in one shot" into "think, doubt, correct, and rewrite."

No weight modification — works instantly with GPT-5.4, Claude, Gemini, Llama, or any OpenAI API-compatible LLM by changing one line: base_url. Ships with 9 domain-specific emergence engines (invention, pharma, genomics, chemistry, ecology, law, and more — 5,538 expert data items) activated by a simple tag like model="gpt-5.4::pharma".

pip install marl-middleware
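Under the claimed one-line integration, a request would be an ordinary OpenAI-style chat payload pointed at the middleware. A minimal sketch, where the URL is a placeholder we invented (not MARL's documented address) and the model tag follows the `::domain` convention the post describes:

```python
import json

# Placeholder endpoint -- swap in the real middleware address.
MARL_BASE_URL = "http://localhost:8080/v1"

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat request; per the post, the
    '::domain' suffix on the model name selects an emergence engine."""
    return {
        "url": f"{MARL_BASE_URL}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = chat_request("gpt-5.4::pharma", "Double-check this dosage calculation.")
print(json.dumps(req, indent=2))
```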

MARL is also officially registered on ClawHub, the skill marketplace of OpenClaw — an AI agent platform with 260K+ developers and 3,200+ skills. It's the first middleware in the Reasoning Enhancement category. One command — clawhub install marl-middleware — gives your AI agent a metacognition upgrade.

📝 Technical deep dive: https://huggingface.co/blog/FINAL-Bench/marl-middleware
📦 PyPI: https://pypi.org/project/marl-middleware/
🐙 GitHub: https://github.com/Vidraft/MARL
🦀 ClawHub: https://clawhub.ai/Cutechicken99/marl-middleware

#MARL #LLM #Hallucination #Metacognition #MultiAgent #AIMiddleware #FINALBench #OpenClaw #ClawHub #PyPI #AGI #HuggingFace #ReasoningAI #SelfCorrection #GlassBoxAI
JonnaMat posted an update about 22 hours ago
🚀 FlashHead: Efficient Drop-In Replacement for the Classification Head in Language Model Inference

πŸ”Ž Check out our latest FlashHead-enabled model: embedl/Cosmos-Reason2-2B-W4A16-Edge2-FlashHead

🧩 Seamless integration with vLLM:
docker run --rm -it \
  --network host \
  --shm-size=8g \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --runtime=nvidia \
  --name=vllm-serve \
  -e HF_TOKEN=hf_*** \
  -e HF_HOME=/root/.cache/huggingface \
  embedl/vllm:latest-jetson-orin-flashhead \
  vllm serve "embedl/Cosmos-Reason2-2B-W4A16-Edge2-FlashHead" \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.75 \
    --max-num-seqs 2 \
    --trust-remote-code


sdiazlor posted an update 1 day ago
More OSS than ever with the latest pruna 0.3.2 release. It extends existing algorithm families, such as compilers, kernels, and pruners, and adds new ones, including decoders, distillers, enhancers, and recoverers. But it's not only a collection of algorithms; instead, you can easily combine them to get the biggest efficiency win.

Read the full blog here: https://huggingface.co/blog/PrunaAI/pruna-0-3-2-open-source-optimization-algorithms
branikita posted an update 1 day ago
Testing a parallel gripper with a MaixSense-A010 ToF depth camera (100-point sensor) and pressure sensors.

By combining depth data with force feedback, the gripper closes only when the object is in a graspable position. If the object slips or leaves the grasp zone before closing, the system can automatically retry — as shown in the video.
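The described behavior reduces to a small control loop. A minimal sketch, where the thresholds, function names, and sensor readings are illustrative assumptions rather than values from the actual gripper firmware:

```python
# Close only when depth says the object is in the grasp zone; use pressure
# feedback to confirm the grasp, and retry if the object slipped away.
GRASP_ZONE_MM = (20, 80)    # assumed acceptable depth range for a grasp
PRESSURE_CONTACT = 0.3      # assumed minimum normalized contact pressure

def in_grasp_zone(depth_mm: float) -> bool:
    lo, hi = GRASP_ZONE_MM
    return lo <= depth_mm <= hi

def attempt_grasp(depth_readings, pressure_after_close) -> bool:
    """Return True once a grasp succeeds; retry when the object slips."""
    for depth, pressure in zip(depth_readings, pressure_after_close):
        if not in_grasp_zone(depth):
            continue                  # object not graspable yet -- keep waiting
        if pressure >= PRESSURE_CONTACT:
            return True               # force feedback confirms a stable grasp
        # slipped during closing: reopen and retry on the next reading
    return False

print(attempt_grasp([120, 50, 45], [0.0, 0.1, 0.6]))  # retries once, then succeeds
```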

Gripper repository (version without camera and sensors):
https://github.com/roboninecom/SO-ARM100-101-Parallel-Gripper
kanaria007 posted an update about 6 hours ago
βœ… Article highlight: *Ethics as Institutional Interface* (v0.1)

TL;DR:
Ethics in SI-Core should not behave like a static safety filter or a one-time compliance checklist. It should behave more like an institution: with roles, principals, red lines, appeals, overrides, break-glass procedures, and civic oversight around auditable runtime decisions.

Read:
kanaria007/agi-structural-intelligence-protocols

Why it matters:
• treats ethics as a structural interface: who can do what to whom, under which constraints, with which recourse
• separates ethical governance into red-line zones, review zones, and metric zones
• makes appeals, overrides, and break-glass explicit, traceable, and reviewable
• connects ETH to PoLB experiments, ID / Role / Persona, and civic oversight

What’s inside:
• ETH as: Principal × Role/Persona × Context → ETH-Constraints → ETHDecision
• a portable ETHDecision object shape (ALLOW | DENY | ESCALATE + exported governance verdicts)
• red lines vs review-required cases vs metric-monitored cases
• appeals (policy change), overrides (case-specific human intervention), and break-glass (pre-negotiated emergency procedure)
• ETH × PoLB × experiments: how ethics becomes a design partner for rollout and evaluation
• ETH × ID × Role & Persona: per-principal constraints, role capability gates, and persona-aware explanations
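The portable ETHDecision shape above could be sketched as a small data type. Field names here are our reading of the described mapping (Principal × Role/Persona × Context → constraints → decision), not the article's exact schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ALLOW = "ALLOW"
    DENY = "DENY"
    ESCALATE = "ESCALATE"

@dataclass
class ETHDecision:
    """Auditable runtime decision: who (principal), acting as what (role),
    in which context, with the constraints applied and the recourse path."""
    principal: str
    role: str
    context: str
    verdict: Verdict
    constraints: list = field(default_factory=list)  # ETH-Constraints applied
    recourse: str = ""  # appeal / override / break-glass route, if any

d = ETHDecision("user:alice", "clinician", "prescribe", Verdict.ESCALATE,
                constraints=["review-zone"], recourse="override:on-call-human")
print(d.verdict.value)
```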
MohamedRashad posted an update about 8 hours ago
We just released Navid-AI/Arabic-TTS-Arena for the Arabic community to evaluate and rank different Arabic text-to-speech models.

Please check it out if you are a native Arabic speaker. We think you will love it 🤗
smirki posted an update about 10 hours ago
Introducing OmniCoder-9B

We trained a 9B coding agent on 425K real agentic trajectories from Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro across Claude Code, OpenCode, Codex, and Droid scaffolding.

Results:
- GPQA Diamond: 83.8 pass@1 (166/198), 86.4 pass@3 — above GPT-OSS-120B (80.1), Qwen3.5-9B (81.7), and Claude Haiku 4.5 (73)
- AIME 2025: 90 pass@5 (27/30)
- Terminal-Bench 2.0: 28.1 (25/89) — +8.1 points over base model

The key insight: we trained on what frontier agents actually do: real tool calls, real error recovery, real edit diffs. The model learns read-before-write patterns, responds to LSP diagnostics, and applies minimal diffs instead of full rewrites.

Base: Qwen3.5-9B. LoRA SFT, 4x H200, Axolotl, 99.35% packing efficiency.

Weights: huggingface.co/Tesslate/OmniCoder-9B
GGUF: huggingface.co/Tesslate/OmniCoder-9B-GGUF
Apache 2.0. Run it locally.
AbstractPhil posted an update about 13 hours ago
geolip-captionbert-8192

This BERT is currently being distilled from five BERT teachers on the Conceptual Captions dataset. Recall accuracy is measured via whitened Procrustes alignment, and the losses keep that rotation correctly aligned.

Results from the smaller prototypes suggest this model will reach 100% recall accuracy by aligning to the teachers' most reliable opinions on the correct answer, in conjunction with all the geometric losses.

No joke, this may be the smallest, least-computation, most accurate, and fastest BERT I've trained thus far — and it is based entirely on five teachers simultaneously feeding opinions through a relay hub.
Teen-Different posted an update 1 day ago
Adaptive Attention at Inference Time: Does It Actually Work?

A hypernetwork that rewires GPT's value heads on every forward pass. The answer: not a clean win — but not a failure either.

Blog post: https://teendifferent.substack.com/p/adaptive-attention-at-inference-time
Code: https://github.com/REDDITARUN/a-gpt
Weights: Teen-Different/adaptive-gpts


What This Is

Five small language model variants trained for 12k steps on a 300M token mixed corpus, answering one question: can the residual stream be used to slightly rewrite the model's own computation while it's running?

Instead of a fixed W_v for every context, a TinyHeadTransformer hypernetwork generates low-rank (LoRA-style) updates to the value projection of each attention head — conditioned on the current residual stream. Each token gets a dynamically adapted value transformation.


The Five Models

Base GPT — 28.9M params, 139 tok/s, val loss ~3.82
Matched GPT (+2 layers) — 30.5M params, 204 tok/s, val loss ~3.80
Adaptive GPT — 30.5M params, 38.7 tok/s, val loss ~3.88–3.92
Diffusion GPT — 28.9M params, 110 tok/s, val loss ~5.0–5.2
Adaptive Diffusion GPT — 30.5M params, 40.4 tok/s, val loss ~5.0–5.2

Architecture: 4 layers, 4 heads, d_model=256, context=256, RoPE, GPT-2 tokenizer.


How the Hypernetwork Works

For each attention head, a TinyHeadTransformer encodes the head's residual stream slice, mean-pools it to a conditioning vector, then projects into low-rank factors A (d×r) and B (r×d) at rank=8. The dynamic value update follows LoRA conventions with alpha/r scaling. B is zero-initialized so the adaptive path starts inert and the model begins as a vanilla GPT — critical for training stability.
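As one concrete reading of this mechanism, here is a NumPy sketch. The random linear map stands in for the TinyHeadTransformer (whose internals the post doesn't give); the shapes, zero-init of B, and alpha/r scaling follow the description above:

```python
import numpy as np

d, r, alpha = 256, 8, 16.0
rng = np.random.default_rng(0)

W_v = rng.standard_normal((d, d)) * 0.02   # static value projection
cond = rng.standard_normal(d)              # mean-pooled residual-stream slice

# Stand-in "hypernetwork": project the conditioning vector to factor A.
proj_A = rng.standard_normal((d, d * r)) * 0.02
A = (cond @ proj_A).reshape(d, r)          # (d x r), context-dependent
B = np.zeros((r, d))                       # (r x d), zero-init: path starts inert

# LoRA-style dynamic update; at init this equals W_v exactly,
# so the model begins as a vanilla GPT -- the stability trick described above.
W_v_eff = W_v + (alpha / r) * (A @ B)
assert np.allclose(W_v_eff, W_v)
print(W_v_eff.shape)
```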

The diffusion variant uses bidirectional attention, RMSNorm, squared ReLU, and a learned timestep embedding.