See Kimi-K2.5 MLX in action - demonstration video
Tested on an M3 Ultra (512 GB) + M4 Max (128 GB) using distributed compute over Wi-Fi with Inferencer app v1.9.5
- Single inference: ~22.2 tokens/s @ 1000 tokens
- Memory usage: ~513 GiB
- For a larger context window, you can raise the macOS GPU wired-memory limit (the value is in MB):
- `sudo sysctl iogpu.wired_limit_mb=510000`
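To try the quant outside the Inferencer app, here is a minimal single-machine sketch assuming the standard mlx-lm Python API (`mlx_lm.load` / `mlx_lm.generate`; keyword details vary between mlx-lm versions, and the distributed two-Mac setup above is an Inferencer feature, not part of this script):

```python
# pip install mlx-lm
from mlx_lm import load, generate

# Load the 4.2-bit quant directly from the Hugging Face repo.
model, tokenizer = load("inferencerlabs/Kimi-K2.5-MLX-4.2bit")

# Kimi-K2.5 is a chat model, so wrap the prompt in its chat template.
messages = [{"role": "user", "content": "Write a quicksort in Python."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=1000, verbose=True)
```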
The q4.2 (4.2-bit) quant typically achieves 1.39062 perplexity in our coding test.
| Quantization | Perplexity | Token Accuracy | Missed Divergence |
|---|---|---|---|
| q3.5 | 168.0 | 43.45% | 72.57% |
| q3.6 | 1.59375 | 83.75% | 41.11% |
| q3.8 | 1.49218 | 86.65% | 36.79% |
| q4.2 | 1.39062 | 89.30% | 31.70% |
| q4.5 | 1.33593 | 91.65% | 27.61% |
| q6.5 | 1.21875 | 96.95% | 12.03% |
| Base | 1.20312 | 100.00% | 0.00% |
- Perplexity: Measures how confidently the quant predicts the base model's tokens (lower is better)
- Token Accuracy: The percentage of generated tokens that exactly match the base model's output
- Missed Divergence: Measures the severity of misses, i.e. how far off the quant's prediction was when it differed from the base token
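For concreteness, here is a sketch of how metrics like these can be computed by replaying the base model's token stream through the quantized model. The card does not publish its evaluation harness, so `quant_eval`, its inputs, and the reading of "Missed Divergence" below are all assumptions:

```python
import numpy as np

def quant_eval(base_tokens: np.ndarray, quant_logits: np.ndarray):
    """Score a quantized model against the base model's own token stream.

    base_tokens: (T,) ints -- tokens the unquantized base model produced
    quant_logits: (T, V) floats -- the quant's logits at each position
    """
    # Log-softmax over the vocabulary at every position.
    shifted = quant_logits - quant_logits.max(axis=-1, keepdims=True)
    logprobs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

    # Perplexity: confidence assigned to the base tokens (lower is better).
    tok_logprobs = logprobs[np.arange(len(base_tokens)), base_tokens]
    perplexity = float(np.exp(-tok_logprobs.mean()))

    # Token Accuracy: how often the quant's argmax matches the base token.
    preds = logprobs.argmax(axis=-1)
    token_accuracy = float((preds == base_tokens).mean())

    # "Missed Divergence" (assumed interpretation): on missed positions,
    # the average probability gap between the quant's top choice and the
    # base token -- a proxy for how badly each miss diverged.
    missed = preds != base_tokens
    probs = np.exp(logprobs)
    gap = probs[missed, preds[missed]] - probs[missed, base_tokens[missed]]
    missed_divergence = float(gap.mean()) if missed.any() else 0.0

    return perplexity, token_accuracy, missed_divergence
```

Under this reading, the Base row anchors the table: a quant that reproduced the base model exactly would score 100% token accuracy and 0% missed divergence.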
Quantized with a modified version of MLX.
For more details, see the demonstration video or visit the Kimi-K2.5 model page.
Disclaimer
We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate, and you are responsible for verifying their output before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.
Downloads last month: 2,948
Model size: 1T params
Tensor types: BF16 · U32 · F32
Model tree for inferencerlabs/Kimi-K2.5-MLX-4.2bit
Base model: moonshotai/Kimi-K2.5