Coding Models
- Qwen/Qwen3-Coder-480B-A35B-Instruct — Text Generation · 480B · Updated Aug 21, 2025 · 41.5k downloads · 1.33k likes
Models
- OpenGVLab/InternVL3-78B-AWQ — Image-Text-to-Text · Updated Sep 11, 2025 · 296 downloads · 10 likes
- OpenGVLab/InternVL3-78B — Image-Text-to-Text · Updated Sep 11, 2025 · 38.7k downloads · 234 likes
- google-t5/t5-base — Translation · Updated Feb 14, 2024 · 1.67M downloads · 774 likes
- HuggingFaceH4/zephyr-7b-alpha — Text Generation · 7B · Updated Oct 16, 2024 · 5.4k downloads · 1.12k likes
Vision Models
- genmo/mochi-1-preview — Text-to-Video · Updated Sep 4, 2025 · 7.41k downloads · 1.33k likes
- stabilityai/stable-diffusion-3.5-large — Text-to-Image · Updated Oct 22, 2024 · 50.7k downloads · 3.45k likes
- stabilityai/stable-diffusion-3.5-medium — Text-to-Image · Updated Oct 31, 2024 · 252k downloads · 926 likes
Must Reads
- ZeroSearch: Incentivize the Search Capability of LLMs without Searching — Paper · arXiv:2505.04588 · Published May 7, 2025 · 65 upvotes