See GLM-5 MLX in action - demonstration video

Tested on an M3 Ultra with 512 GB RAM using the Inferencer app v1.10

  • Single inference: ~16.6 tokens/s @ 1000 tokens
  • Batched inference: ~31.8 total tokens/s across six concurrent inferences
  • Memory usage: ~417 GiB
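
For reference, the sketch below shows how a single inference against this quant might look with the stock `mlx-lm` Python package. This is an assumption on our part: the figures above were measured in the Inferencer app, not with this script.

```python
# Minimal sketch: load the 4.8-bit quant and run a single inference with
# stock mlx-lm (pip install mlx-lm). Assumption: the throughput numbers
# above come from the Inferencer app, not from this script.
from mlx_lm import load, generate

model, tokenizer = load("inferencerlabs/GLM-5-MLX-4.8bit")

prompt = "Write a Python function that checks whether a string is a palindrome."
# verbose=True prints generation speed (tokens/s) for comparison with the
# figures above.
text = generate(model, tokenizer, prompt=prompt, max_tokens=1000, verbose=True)
print(text)
```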

The q4.8 (4.8-bit) quant typically achieves 1.281 perplexity in our coding test:

| Quantization | Perplexity | Token Accuracy | Missed Divergence |
|--------------|------------|----------------|-------------------|
| q3.5         | 168.0      | 43.45%         | 72.57%            |
| q4.5         | 1.33593    | 91.65%         | 27.61%            |
| q4.8         | 1.28125    | 93.75%         | 21.15%            |
| q5.5         | 1.23437    | 95.05%         | 17.28%            |
| q6.5         | 1.21875    | 96.95%         | 12.03%            |
| q8.5         | 1.21093    | 97.55%         | 10.50%            |
| q9           | 1.21093    | 97.55%         | 10.50%            |
| Base         | 1.20312    | 100.0%         | 0.000%            |
  • Perplexity: Measures the model's confidence in predicting the base model's tokens (lower is better)
  • Token Accuracy: The percentage of base-model tokens the quantized model reproduces exactly
  • Missed Divergence: Measures the severity of misses, i.e., how far off the prediction is when a base token is missed
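
To make these definitions concrete, here is one way such metrics could be computed. This is our own sketch; the exact evaluation harness behind the table is not published, so the formulas below, particularly for missed divergence, are assumptions.

```python
import numpy as np

def score_quant_vs_base(quant_logits: np.ndarray, base_tokens: np.ndarray):
    """Compare a quantized model's predictions against the base model's tokens.

    quant_logits -- (seq_len, vocab_size) next-token logits from the quant
    base_tokens  -- (seq_len,) token ids the base model actually produced
    """
    # Numerically stable log-softmax over the vocabulary.
    z = quant_logits - quant_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

    # Log-probability the quant assigned to each base token.
    positions = np.arange(base_tokens.shape[0])
    base_log_probs = log_probs[positions, base_tokens]

    # Perplexity: exp of the mean negative log-likelihood of the base tokens.
    perplexity = float(np.exp(-base_log_probs.mean()))

    # Token accuracy: fraction of positions where the quant's top token
    # matches the base model's token.
    hits = quant_logits.argmax(axis=-1) == base_tokens
    token_accuracy = float(hits.mean())

    # Missed divergence (assumed definition): average probability mass the
    # quant withheld from the base token at positions where it missed.
    misses = ~hits
    missed_divergence = (
        float((1.0 - np.exp(base_log_probs[misses])).mean()) if misses.any() else 0.0
    )

    return perplexity, token_accuracy, missed_divergence
```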
Quantized with a modified version of MLX.
For more details, see the demonstration video or visit GLM-5.
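
The mixed 4.8-bit recipe itself requires that modified MLX, but a uniform-width quantization with stock `mlx-lm` follows the same general pattern. The sketch below is an illustrative baseline, not the actual recipe used for this model, and the output path is hypothetical.

```python
# Baseline sketch with stock mlx-lm: uniform 5-bit quantization of the base
# model. The published 4.8-bit quant mixes bit widths via a modified MLX,
# which plain mlx-lm does not expose; q_bits=5 is an illustrative stand-in.
from mlx_lm import convert

convert(
    "zai-org/GLM-5",           # base model repo on the Hub
    mlx_path="glm-5-mlx-q5",   # hypothetical output directory
    quantize=True,
    q_bits=5,
    q_group_size=64,
)
```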

Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying the information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.

Model size: 744B params (Safetensors)
Tensor types: BF16, U32, F32
Format: MLX
Base model: zai-org/GLM-5 (this model is one of 11 quantizations of the base)