mlx-community/GLM-5-MXFP4-Q8

This model was converted to MLX format from zai-org/GLM-5 using a custom MXFP4-Q8 quantization scheme.

GLM-5 is a 744B parameter (40B active) Mixture-of-Experts model developed by Z.ai, targeting complex systems engineering and long-horizon agentic tasks. It uses Multi-Head Latent Attention (MLA) with 47 transformer layers, 64 routed experts (4 active per token), and 1 shared expert.
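 
As a purely schematic illustration of the routing described above (top-4 of 64 routed experts plus one always-on shared expert), here is a minimal sketch; the class name, dimensions, and naive per-token loop are invented for clarity and do not reflect the actual GLM-5 implementation:

import mlx.core as mx
import mlx.nn as nn

class ToyMoELayer(nn.Module):
    """Illustrative sparse MoE block: 64 routed experts, 4 active per token, 1 shared expert."""

    def __init__(self, dim=256, n_experts=64, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.experts = [
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        ]
        self.shared_expert = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim)
        )

    def __call__(self, x):
        # x: (tokens, dim). Pick the 4 highest-scoring routed experts per token.
        scores = self.router(x)
        idx = mx.argsort(-scores, axis=-1)[..., : self.top_k]
        weights = mx.softmax(mx.take_along_axis(scores, idx, axis=-1), axis=-1)
        rows = []
        for t in range(x.shape[0]):  # naive per-token loop, fine for a sketch
            xt = x[t][None]
            row = self.shared_expert(xt)[0]  # shared expert always contributes
            for j in range(self.top_k):
                e = int(idx[t, j])
                row = row + weights[t, j] * self.experts[e](xt)[0]
            rows.append(row)
        return mx.stack(rows)

# Example: route a batch of 3 token embeddings through the toy layer.
layer = ToyMoELayer()
y = layer(mx.random.normal((3, 256)))

Only the 4 selected experts (plus the shared one) run per token, which is why the active parameter count (40B) is far smaller than the total (744B).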

Quantization

This model uses a mixed-precision quantization scheme:

| Component | Mode | Bits | Group Size |
|---|---|---|---|
| Expert weights (switch_mlp) | MXFP4 | 4 | 32 |
| Attention, embeddings, shared expert, dense MLP, lm_head | Affine | 8 | 64 |
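 
For reference, a mixed scheme along these lines can be expressed with mlx_lm.convert and its quant_predicate hook, which lets each layer return its own quantization parameters. The sketch below is an outline under assumptions, not the exact recipe used for this checkpoint; in particular, the "mode": "mxfp4" entry assumes a recent MLX/mlx-lm that supports per-layer MXFP4, and mixed_mxfp4_q8 is a hypothetical helper name.

from mlx_lm import convert

def mixed_mxfp4_q8(path, module, config):
    # Illustrative per-layer policy: expert (switch_mlp) weights -> 4-bit MXFP4,
    # group size 32; everything else quantizable -> 8-bit affine, group size 64.
    if not hasattr(module, "to_quantized"):
        return False  # leave non-quantizable layers (norms, etc.) alone
    if "switch_mlp" in path:
        return {"bits": 4, "group_size": 32, "mode": "mxfp4"}  # "mode" key assumed
    return {"bits": 8, "group_size": 64}

convert(
    "zai-org/GLM-5",
    mlx_path="GLM-5-MXFP4-Q8",
    quantize=True,
    quant_predicate=mixed_mxfp4_q8,
)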

Use with mlx-lm

pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/GLM-5-MXFP4-Q8")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
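 
The model can also be run from the command line (mlx_lm.generate --model mlx-community/GLM-5-MXFP4-Q8 --prompt "hello") or streamed token by token. The sketch below assumes a recent mlx-lm where stream_generate yields response chunks with a .text field:

from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/GLM-5-MXFP4-Q8")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they are produced instead of waiting for the full response.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()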