This is an MXFP4 quant of Qwen3-Next-80B-A3B-Instruct.

The GGUF has been updated; please download the latest llama.cpp in order to use it.
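A minimal sketch of building a recent llama.cpp from source and running this quant with `llama-cli`. The GGUF file path below is an assumption for illustration; point `-m` at the file you actually downloaded from this repository.

```shell
# Build llama.cpp from source (a recent build is required for the qwen3next architecture)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j

# Run the quantized model; the .gguf path is a placeholder, substitute your download location
./build/bin/llama-cli \
  -m /path/to/Qwen3-Next-80B-A3B-Instruct-MXFP4_MOE.gguf \
  -p "Hello"
```

Note that this is only a sketch: the build requires CMake and a C/C++ toolchain, and running an 80B-parameter model (even at 4-bit) needs substantial RAM or VRAM.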

Format: GGUF
Model size: 80B params
Architecture: qwen3next
Quantization: 4-bit

Model tree: noctrex/Qwen3-Next-80B-A3B-Instruct-MXFP4_MOE-GGUF is one of 79 quantizations of the base model.