4-bit quantization: MXFP4_MOE vs Q4_K_XL?
Just looking to get some insights from the community. I'm running this model with llama.cpp (which I think is the best option for running GGUF, though feedback/comments on that are also welcome :D ) on a 140GB H200. Why choose one over the other? Speed? "Intelligence"?
Thanks!!
MXFP4 is apparently much faster at the cost of slight quality.
I tried MXFP4 on llama.cpp. Since I don't have enough VRAM to fit the whole model on the GPU, I'm not surprised I didn't notice any speed difference. When using tools I can't say I saw a difference in behavior, but I didn't test it much. I'll stick with Q4_K_XL.
So the only reason I would switch to it would be if it gave me more tokens/s, and in my case it does not.
Also, my overall impression of GLM 4.7 Flash (Q4) after a few weeks is that I prefer it over gpt-oss 20B (my other main model) for design, planning and troubleshooting. But at some point it starts spinning its wheels during implementation, so I switch back to gpt-oss and that usually takes the job past the finish line. It's been a good combo for me over the last 2 weeks.
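If anyone wants to sanity-check the tokens/s question on their own setup, timing a fixed greedy generation with both files is enough for a rough answer. Here is a minimal sketch using llama-cpp-python, assuming you have both quants downloaded locally; the model paths are placeholders, and n_gpu_layers should be whatever you can actually offload:

```python
# Rough tokens/s comparison between two quants of the same model.
# The paths below are placeholders -- point them at your own GGUF files.
import time
from llama_cpp import Llama

MODELS = {
    "Q4_K_XL": "models/model-Q4_K_XL.gguf",
    "MXFP4_MOE": "models/model-MXFP4_MOE.gguf",
}
PROMPT = "Explain, step by step, how a hash map handles collisions."
N_TOKENS = 256

for name, path in MODELS.items():
    # n_gpu_layers=-1 offloads everything; lower it if the model doesn't fit in VRAM.
    llm = Llama(model_path=path, n_gpu_layers=-1, n_ctx=4096, verbose=False)
    start = time.perf_counter()
    out = llm(PROMPT, max_tokens=N_TOKENS, temperature=0.0)
    elapsed = time.perf_counter() - start
    n_gen = out["usage"]["completion_tokens"]
    print(f"{name}: {n_gen} tokens in {elapsed:.1f}s ({n_gen / elapsed:.1f} tok/s)")
    del llm  # release the model before loading the next one
```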
MXFP4 is apparently much faster at the cost of slight quality.
Everybody's MXFP4 quants use f32/q8_0/mxfp4 for their precision, except the gpt-oss models, which use f32/f16/mxfp4.
That makes them much faster (f32/q8_0/mxfp4), but it degrades the quality quite a lot, whereas the gpt-oss models have higher quality output.
The easiest way to notice this without getting too far into the weeds is to compare the gpt-oss 20B models, and to look at their structure versus all the other MXFP4 GGUF quants.
I think this has a lot to do with why the gpt-oss models are so good in MXFP4 format, while all the rest are just known to be faster at the expense of quality.
https://huggingface.co/ggml-org/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-mxfp4.gguf - q8_0/f32/mxfp4, 12.1 GB
https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-F16.gguf - f16/f32/mxfp4, 13.8 GB
The Ollama model is the same as the Unsloth one, and it is notably better than the ggml version in terms of accuracy and coherency:
https://ollama.com/library/gpt-oss:20b/blobs/e7b273f96360 - bf16/f32/mxfp4, 14 GB
This is the same across the board: all current MXFP4 GGUF quants are structured like the ggml one, as far as I can tell - Unsloth, noctrex, ggml, etc., all GGUF MXFP4s.
Qwen3-Coder-Next, for example, has the same structure: https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF/blob/main/Qwen3-Coder-Next-MXFP4_MOE.gguf
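If anyone wants to verify the tensor layout of a given file instead of taking my word for it, the gguf Python package (the one maintained in the llama.cpp repo) can list the per-tensor quantization types. A minimal sketch, assuming a recent enough gguf release that knows about the MXFP4 type; the file path is a placeholder:

```python
# Summarize which quantization types a GGUF file actually uses, e.g. to see
# whether the non-expert tensors are f16/bf16 or q8_0 while the MoE expert
# tensors sit in MXFP4. The path is a placeholder -- point it at a local file.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("models/gpt-oss-20b-mxfp4.gguf")

counts = Counter()
for tensor in reader.tensors:
    type_name = tensor.tensor_type.name  # e.g. F32, F16, Q8_0, MXFP4
    counts[type_name] += 1
    # Uncomment to see exactly which tensor uses which type:
    # print(f"{tensor.name:60s} {type_name}")

for type_name, n in counts.most_common():
    print(f"{type_name:8s} {n} tensors")
```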
When pitting the models directly against each other (e.g. gpt-oss 20B/120B from ggml or other quantizers versus the Ollama or Unsloth versions), the Ollama and Unsloth ones are notably slower, and prompt processing is a little slower as well, but they are also very noticeably more accurate. The size difference doesn't seem too significant: ~2 GB between the two 120B models.
Also, for argument's sake, "the difference between f16 and q8_0 is negligible" is fine for roleplaying. But for coding, research, or agentic tasks it is a lot more noticeable than negligible.
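For a quick way to see that yourself without setting up a real benchmark, run the same coding prompt greedily through both quants and compare the outputs side by side (llama.cpp's llama-perplexity tool gives you an actual number if you want one rather than a vibe check). A minimal sketch with llama-cpp-python; the paths and the prompt are placeholders:

```python
# Side-by-side greedy generations from two quants of the same model, so
# quality differences on a concrete task are easy to eyeball.
# The paths and the prompt are placeholders -- substitute your own.
from llama_cpp import Llama

MODELS = {
    "Q4_K_XL": "models/model-Q4_K_XL.gguf",
    "MXFP4_MOE": "models/model-MXFP4_MOE.gguf",
}
PROMPT = (
    "Write a Python function that parses an ISO 8601 timestamp string "
    "and returns the number of seconds since the Unix epoch."
)

for name, path in MODELS.items():
    llm = Llama(model_path=path, n_gpu_layers=-1, n_ctx=4096, verbose=False)
    out = llm(PROMPT, max_tokens=400, temperature=0.0)  # greedy for reproducibility
    print(f"===== {name} =====")
    print(out["choices"][0]["text"].strip())
    print()
    del llm  # release the model before loading the next quant
```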
Thanks for your work Daniel!
MXFP4 is apparently much faster at the cost of slight quality.
Is it possible to see a comparison somewhere of exactly how big this quality drop is?

