Inference Providers
Active filters: amd
lablab-ai-amd-developer-hackathon/CyberSecQwen-4B
Text Generation • 4B • Updated • 173 • 7
llm-semantic-router/modernbert-base-32k-haldetect
Token Classification • 0.1B • Updated • 111 • 2
Image-Text-to-Text • Updated • 44 • 1
athena129/CyberSecQwen-4B
Text Generation • 4B • Updated • 108 • 1
mradermacher/CyberSecQwen-4B-GGUF
4B • Updated • 710 • 1
eric717273/Qwen3.5-122B-A10B-MXFP4_MOE-MTP-GGUF
Text Generation • 125B • Updated • 1
0xSero/Qwen3.6-35B-A3B-GGUF-Strix
35B • Updated • 3.98k • 19
lablab-ai-amd-developer-hackathon/OncoAgent-v1.0-27B
Text Generation • Updated • 6
lablab-ai-amd-developer-hackathon/OncoAgent-v1.0-9B
Text Generation • Updated • 6
dahara1/llama3-8b-amd-npu
Tech-Meld/gpus-everywhere
Text-to-Image • Updated • 10 • 1
dahara1/llama3.1-8b-Instruct-amd-npu
dahara1/ALMA-Ja-V3-amd-npu
dahara1/llama-translate-amd-npu
Translation • Updated • 5
dahara1/llama-translate-gguf
8B • Updated • 183 • 16
amd/Llama-2-7b-hf-awq-g128-int4-asym-bf16-onnx-ryzen-strix
Text Generation • Updated • 63
amd/Llama2-7b-chat-awq-g128-int4-asym-bf16-onnx-ryzen-strix
Text Generation • Updated • 8
amd/Llama-3-8B-awq-g128-int4-asym-bf16-onnx-ryzen-strix
Text Generation • Updated • 4 • 2
amd/Llama-3.1-8B-awq-g128-int4-asym-bf16-onnx-ryzen-strix
Text Generation • Updated • 6 • 2
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-bf16-onnx-ryzen-strix
Text Generation • Updated • 12 • 3
amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-bf16-onnx-ryzen-strix
Text Generation • Updated • 10 • 2
uday610/Llama2-7b-chat-awq-g128-int4-asym-fp32-onnx-ryzen-strix-hybrid
Text Generation • Updated
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-fp16-onnx-dml
Text Generation • Updated
amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid
Text Generation • Updated • 5
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-fp16-onnx-hybrid
Text Generation • Updated • 6
amd/Llama-2-7b-chat-hf-awq-g128-int4-asym-fp16-onnx-dml
Text Generation • Updated
amd/Llama-2-7b-hf-awq-g128-int4-asym-fp16-onnx-hybrid
Text Generation • Updated • 3
amd/Llama-2-7b-chat-hf-awq-g128-int4-asym-fp16-onnx-hybrid
Text Generation • Updated • 6
amd/Llama-3-8B-awq-g128-int4-asym-fp16-onnx-hybrid
Text Generation • Updated • 3