---
base_model:
- zai-org/GLM-5
---
MoE quants of GLM-5 (Q8_0 quantization by default, with the routed-expert tensors quantized further).
Note: running these GGUFs requires pulling and compiling llama.cpp with this PR: https://github.com/ggml-org/llama.cpp/pull/19460
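As a rough sketch (assuming a standard CMake build; the local branch name and the CUDA flag are just examples, swap in the flags for your backend):

```bash
# Fetch the PR branch (pull/19460) into a local branch and build llama.cpp.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git fetch origin pull/19460/head:pr-19460
git checkout pr-19460

# CUDA build shown here; drop or replace -DGGML_CUDA=ON for other backends.
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```

Once built, point `./build/bin/llama-cli -m` (or `llama-server`) at the first shard of the split GGUF; the remaining shards should be picked up automatically.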
More quants to come soon.
| Quant | Size | Mixture | PPL | KLD |
|---|---|---|---|---|
| Q4_K_M | 432.80 GiB (4.93 BPW) | Q8_0-Q4_K-Q4_K-Q5_K | 8.7486 ± 0.17123 | TBD |
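The PPL and KLD columns are the kind of numbers reported by llama.cpp's `llama-perplexity` tool; the sketch below shows that workflow in general, not the exact settings used for this table (model and eval-file names are placeholders, and the corpus/context length here isn't stated).

```bash
# Perplexity of the quant over an eval text (file names are placeholders).
./build/bin/llama-perplexity -m GLM-5-Q4_K_M-00001-of-XXXXX.gguf -f wiki.test.raw

# KLD: save base-model logits once, then compare the quant against them.
./build/bin/llama-perplexity -m GLM-5-BF16-00001-of-XXXXX.gguf -f wiki.test.raw \
    --kl-divergence-base glm5-base-logits.bin
./build/bin/llama-perplexity -m GLM-5-Q4_K_M-00001-of-XXXXX.gguf \
    --kl-divergence-base glm5-base-logits.bin --kl-divergence
```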