Active filters: quantllm
codewithdark/Llama-3.2-3B-4bit
3B • Updated • 5
codewithdark/Llama-3.2-3B-GGUF-4bit
3B • Updated • 2
codewithdark/Llama-3.2-3B-4bit-mlx
Text Generation • 3B • Updated • 74
QuantLLM/Llama-3.2-3B-4bit-mlx
Text Generation • 3B • Updated • 24
QuantLLM/Llama-3.2-3B-2bit-mlx
Text Generation • 3B • Updated • 41
QuantLLM/Llama-3.2-3B-8bit-mlx
Text Generation • 3B • Updated • 27
QuantLLM/Llama-3.2-3B-5bit-mlx
Text Generation • 3B • Updated • 20
QuantLLM/Llama-3.2-3B-5bit-gguf
3B • Updated • 13
QuantLLM/Llama-3.2-3B-2bit-gguf
3B • Updated • 27
QuantLLM/functiongemma-270m-it-8bit-gguf
0.3B • Updated • 7 • 1
QuantLLM/functiongemma-270m-it-4bit-gguf
0.3B • Updated • 14
QuantLLM/functiongemma-270m-it-4bit-mlx
Text Generation • 0.3B • Updated • 6
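The listing above spans 2-, 4-, 5-, and 8-bit variants of the same 3B-parameter model. A back-of-the-envelope sketch of why the bit-width matters: weight-only memory scales linearly with bits per parameter. This estimate ignores the KV cache, activations, and per-group quantization scales, and treats "3B" as exactly 3e9 parameters, so actual file sizes will differ somewhat.

```python
def approx_weight_gb(params: float, bits: int) -> float:
    """Rough weight-only memory estimate: params * bits / 8 bytes, reported in GB (1e9 bytes)."""
    return params * bits / 8 / 1e9

# Approximate weight memory for a 3B model at each quantization level in the listing:
for bits in (2, 4, 5, 8):
    print(f"{bits}-bit: ~{approx_weight_gb(3e9, bits):.2f} GB")
# 2-bit: ~0.75 GB, 4-bit: ~1.50 GB, 5-bit: ~1.88 GB, 8-bit: ~3.00 GB
```

This is why the 2-bit GGUF fits where the 8-bit MLX variant may not: the trade-off is purely memory versus quantization error, since the parameter count is identical across variants.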