---
license: other
license_name: modified-mit
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
base_model: moonshotai/Kimi-K2-Instruct-0905
---
# NOTICE
No longer available on Hugging Face due to storage restrictions; [archived here](https://modelscope.ai/models/inferencerlabs/Kimi-K2-Instruct-0905-MLX-3.8bit-archive)

**See Kimi-K2-Instruct-0905 Dynamic MLX in action - [https://youtu.be/Ia-q3Ll4tAY](https://youtu.be/Ia-q3Ll4tAY)**

*The q3.825-bit dynamic quant typically achieves 1.256 perplexity in our testing, landing much closer to q4 perplexity (1.168) than to q3 perplexity (1.900).*

| Quantization | Perplexity |
|:------------:|:----------:|
| **q2**       | 41.293     |
| **q3**       | 1.900      |
| **q3.825**   | 1.256      |
| **q3.985**   | 1.243      |
| **q4**       | 1.168      |
| **q5**       | 1.141      |
| **q6**       | 1.128      |
| **q8**       | 1.128      |
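
For reference, below is a minimal sketch of how token-level perplexity can be measured with [mlx-lm](https://github.com/ml-explore/mlx-lm). This is not the harness used for the table above; the weight path, evaluation file, and context length are placeholders, and absolute values depend heavily on the evaluation corpus and methodology.

```python
# Rough perplexity sketch with mlx-lm (hypothetical paths; not the harness
# used for the table above).
import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

# Load the quantized weights from a local directory (placeholder path).
model, tokenizer = load("path/to/Kimi-K2-Instruct-0905-MLX-3.8bit")

text = open("eval.txt").read()            # any held-out evaluation text
tokens = tokenizer.encode(text)[:4096]    # truncate to a manageable window

inputs = mx.array(tokens[:-1])[None]      # (1, T-1) model inputs
targets = mx.array(tokens[1:])[None]      # (1, T-1) next-token targets

logits = model(inputs)                    # (1, T-1, vocab_size)
nll = nn.losses.cross_entropy(logits, targets, reduction="mean")
print(f"perplexity: {mx.exp(nll).item():.3f}")
```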

## Usage Notes

* Runs on a single M3 Ultra with 512 GB RAM using the [Inferencer app](https://inferencer.com)
* Does not require expanding the VRAM limit
  * However, expanding it allows larger context windows:
    * `sudo sysctl iogpu.wired_limit_mb=507000`
* Expect ~20 tokens/s
* Quantized with a modified version of [MLX](https://github.com/ml-explore/mlx) 0.26
* For more details, see the [demonstration video](https://youtu.be/Ia-q3Ll4tAY) or visit the original [Kimi K2](https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905) model card.
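
While the notes above assume the Inferencer app, weights in standard MLX format can usually also be driven from Python with mlx-lm. The sketch below is a hypothetical example, assuming the archive has been downloaded locally and the architecture is supported by your mlx-lm version; the path and prompt are placeholders.

```python
# Hypothetical mlx-lm usage; the notes above demonstrate the Inferencer app.
from mlx_lm import load, generate

# Placeholder path to the locally downloaded archive.
model, tokenizer = load("path/to/Kimi-K2-Instruct-0905-MLX-3.8bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize MoE routing in two sentences."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```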

## Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying the information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.