CURRENTLY UPLOADING...

See GLM-5 MLX in action: demonstration video

Tested across an M3 Ultra (512 GB RAM) and an M4 Max (128 GB RAM) with Inferencer v1.10.1 distributed compute:

  • Distributed inference: ~12.5 tokens/s @ 1,000 tokens
  • Memory usage: ~444 GB / ~49 GB
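
On a single machine with enough unified memory, the quant can be loaded with the standard mlx-lm API. The following is a minimal sketch, not the Inferencer distributed setup benchmarked above; the repo id is assumed from this model card.

```python
# Minimal single-machine sketch using the stock mlx-lm API.
# This is NOT the Inferencer distributed setup used for the
# numbers above; the repo id is assumed from this model card.
from mlx_lm import load, generate

model, tokenizer = load("inferencerlabs/GLM-5-MLX-5.6bit")

prompt = "Explain mixed-precision quantization in one paragraph."
text = generate(model, tokenizer, prompt=prompt, max_tokens=1000, verbose=True)
```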

The q5.6-bit quant is currently unranked, but should slot in above q5.5 in the table below.

| Quantization | Perplexity | Token Accuracy | Missed Divergence |
|--------------|------------|----------------|-------------------|
| q3.5         | 168.0      | 43.45%         | 72.57%            |
| q4.5         | 1.33593    | 91.65%         | 27.61%            |
| q4.8         | 1.28125    | 93.75%         | 21.15%            |
| q5.5         | 1.23437    | 95.05%         | 17.28%            |
| q6.5         | 1.21875    | 96.95%         | 12.03%            |
| q8.5         | 1.21093    | 97.55%         | 10.50%            |
| q9           | 1.21093    | 97.55%         | 10.50%            |
| Base         | 1.20312    | 100.0%         | 0.00%             |
  • Perplexity: measures the quantized model's confidence in predicting the base model's tokens (lower is better)
  • Token Accuracy: the percentage of base-model tokens reproduced exactly
  • Missed Divergence: measures the severity of misses, i.e. how far off the prediction was when a token was missed
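
The exact evaluation methodology is not published here; the sketch below shows one plausible way such metrics could be computed from the quantized model's logits against the base model's tokens. The function name and the divergence definition are illustrative assumptions.

```python
# Sketch of how metrics like those above can be computed; the exact
# methodology behind the table is not published, so the function name
# and the "missed divergence" definition are illustrative assumptions.
import mlx.core as mx

def compare_to_base(quant_logits: mx.array, base_tokens: mx.array):
    """quant_logits: quantized-model logits, shape [seq_len, vocab].
    base_tokens: token ids generated by the base model, shape [seq_len]."""
    logprobs = quant_logits - mx.logsumexp(quant_logits, axis=-1, keepdims=True)
    targets = mx.expand_dims(base_tokens, axis=-1)
    token_logprobs = mx.take_along_axis(logprobs, targets, axis=-1).squeeze(-1)

    # Perplexity: exp of mean negative log-likelihood of the base tokens.
    perplexity = mx.exp(-token_logprobs.mean()).item()

    # Token accuracy: fraction of greedy picks that match the base tokens.
    hits = (mx.argmax(quant_logits, axis=-1) == base_tokens).astype(mx.float32)
    accuracy = hits.mean().item()

    # "Missed divergence" (assumed definition): on mismatched tokens,
    # the average probability mass placed away from the base token.
    misses = 1.0 - hits
    missed_mass = (1.0 - mx.exp(token_logprobs)) * misses
    divergence = (missed_mass.sum() / mx.maximum(misses.sum(), 1.0)).item()

    return perplexity, accuracy, divergence
```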
Quantized with a modified version of MLX.
For more details, see the demonstration video or visit GLM-5.
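
For comparison, a uniform quantization with stock mlx-lm looks like the following; the mixed 5.6-bit scheme on this card required the modified MLX mentioned above and is not reproduced by this sketch.

```python
# Uniform quantization with stock mlx-lm, for comparison only.
# The 5.6-bit mixed-precision quant on this card came from a
# modified MLX build; q_bits/q_group_size here are example values.
from mlx_lm import convert

convert(
    "zai-org/GLM-5",            # base model on the Hugging Face Hub
    mlx_path="GLM-5-MLX-6bit",  # output directory
    quantize=True,
    q_bits=6,                   # stock MLX supports uniform bit-widths
    q_group_size=64,
)
```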

Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.

Base model: zai-org/GLM-5