| Model | Task | Params | Updated | Downloads | Likes |
|---|---|---|---|---|---|
| ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v2 | Text Generation | 7B | Dec 18, 2024 | 37 | 16 |
| ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3 | Text Generation | 7B | Dec 20, 2024 | 23 | 14 |
| ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1 | Text Generation | 2B | Dec 21, 2024 | 16 | 3 |
| RedHatAI/Llama-3.1-Nemotron-70B-Instruct-HF-quantized.w4a16 | Text Generation | 11B | Jan 3 | 12 | |
| RedHatAI/DeepSeek-Coder-V2-Instruct-0724-quantized.w4a16 | Text Generation | 32B | Jan 12 | 47 | 1 |
| ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1 | Text Generation | 2B | Jan 24 | 25 | 5 |
| ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2 | Text Generation | 2B | Jan 24 | 289 | 7 |
| RedHatAI/Mistral-Small-24B-Instruct-2501-quantized.w4a16 | Text Generation | 4B | Oct 29 | 61 | 1 |
| RedHatAI/DeepSeek-R1-Distill-Qwen-14B-quantized.w4a16 | Text Generation | 3B | Feb 27 | 968 | 1 |
| RedHatAI/DeepSeek-R1-Distill-Qwen-32B-quantized.w4a16 | Text Generation | 6B | Feb 27 | 1.47k | 5 |
| RedHatAI/DeepSeek-R1-Distill-Llama-70B-quantized.w4a16 | Text Generation | 11B | Feb 27 | 269 | 5 |
| RedHatAI/DeepSeek-R1-Distill-Qwen-1.5B-quantized.w4a16 | Text Generation | 0.6B | Feb 27 | 40 | 1 |
| RedHatAI/Pixtral-Large-Instruct-2411-hf-quantized.w4a16 | Image-Text-to-Text | 19B | Mar 31 | 575 | |
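The checkpoints above are 4-bit weight, 16-bit activation (GPTQ / w4a16) quantizations intended for inference with runtimes that understand these formats. As a minimal sketch, one of the listed model IDs could be served with vLLM roughly as follows (assuming vLLM is installed and a GPU with enough memory for the chosen checkpoint; the specific model ID and sampling settings are only illustrative):

```python
# Minimal sketch: running a w4a16-quantized checkpoint from the list with vLLM.
# Assumes vLLM is installed and a CUDA GPU with sufficient memory is available;
# the model ID is one entry from the table above, chosen for illustration only.
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/DeepSeek-R1-Distill-Qwen-32B-quantized.w4a16")

prompts = ["Explain what 4-bit weight-only quantization trades off."]
params = SamplingParams(temperature=0.6, max_tokens=256)

# Generate completions and print the first candidate for each prompt.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

The same IDs can also be passed to other loaders that support these quantization formats; the exact integration depends on the serving stack in use.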