---
license: other
license_name: modified-mit
license_link: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905/blob/main/LICENSE
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
base_model: moonshotai/Kimi-K2-Instruct-0905
---

# mlx-community/moonshotai_Kimi-K2-Instruct-0905-mlx-DQ3_K_M

This model [mlx-community/moonshotai_Kimi-K2-Instruct-0905-mlx-DQ3_K_M](https://huggingface.co/mlx-community/moonshotai_Kimi-K2-Instruct-0905-mlx-DQ3_K_M) was
converted to MLX format from [moonshotai/Kimi-K2-Instruct-0905](https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905)
using mlx-lm version **0.26.3**.

---

## Who is this for?

This quantization is aimed at people running a single Apple Mac Studio M3 Ultra with 512 GB of unified memory. A plain 4-bit quantization of Kimi K2 does not fit. Following published research, the goal is to get close to 4-bit quality from a slightly smaller, smarter mixed quantization, while staying small enough to leave memory free for a useful context window.
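
As a rough sketch of the arithmetic (all figures are assumptions, not measurements: roughly 1 trillion total parameters for Kimi K2, about 0.5 extra bits per weight of group-quantization overhead, and an illustrative ~3.4 bpw average for a mixed 3/4/6-bit recipe):

```python
# Back-of-envelope weight-memory estimate. Assumed figures, not exact:
# ~1e12 total parameters, group-wise quantization adds ~0.5 bits/weight
# of scale/bias overhead on top of the nominal bit-width.
TOTAL_PARAMS = 1.0e12

def weight_gib(effective_bits_per_weight: float) -> float:
    """Approximate weight storage in GiB for a given effective bit-width."""
    return TOTAL_PARAMS * effective_bits_per_weight / 8 / 1024**3

print(f"4-bit (~4.5 bpw effective): {weight_gib(4.5):.0f} GiB")  # ~524 GiB: more than the whole 512 GB
print(f"mixed ~3.4 bpw average:     {weight_gib(3.4):.0f} GiB")  # ~396 GiB: leaves room for the KV cache
```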

---

## Use this model with mlx

```bash
pip install mlx-lm

mlx_lm.generate --model mlx-community/moonshotai_Kimi-K2-Instruct-0905-mlx-DQ3_K_M --temp 0.6 --min-p 0.01 --max-tokens 4096 --trust-remote-code --prompt "Hallo"
```
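
The same thing from Python, as a minimal sketch assuming a recent mlx-lm (`load`, `generate`, and `make_sampler` are part of the mlx-lm Python API; the `tokenizer_config` override is only needed if the tokenizer requires remote code):

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

# Download (or reuse the cached copy of) the quantized model and its tokenizer.
model, tokenizer = load(
    "mlx-community/moonshotai_Kimi-K2-Instruct-0905-mlx-DQ3_K_M",
    tokenizer_config={"trust_remote_code": True},
)

# Kimi K2 is an instruct model, so wrap the prompt in its chat template.
messages = [{"role": "user", "content": "Hallo"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Sample with the same settings as the CLI example above.
text = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=4096,
    sampler=make_sampler(temp=0.6, min_p=0.01),
    verbose=True,
)
```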

---

## What is this DQ3_K_M?

In the arXiv paper [Quantitative Analysis of Performance Drop in DeepSeek Model Quantization](https://arxiv.org/abs/2505.02390) the authors write:

> We further propose `DQ3_K_M`, a dynamic 3-bit quantization method that significantly outperforms traditional `Q3_K_M` variant on various benchmarks, which is also comparable with 4-bit quantization (`Q4_K_M`) approach in most tasks.

and

> dynamic 3-bit quantization method (`DQ3_K_M`) that outperforms the 3-bit quantization implementation in `llama.cpp` and achieves performance comparable to 4-bit quantization across multiple benchmarks.

The resulting multi-bit-width quantization is benchmarked and documented in that paper: most weights stay at 3 bits, while a smaller set of sensitive tensors is kept at 4 or 6 bits, as in the recipe below.

---

## How can you create your own DQ3_K_M quants?

In the `convert.py` file of your local mlx-lm installation ([you can see the original code here](https://github.com/ml-explore/mlx-lm/blob/main/mlx_lm/convert.py)), replace the body of `def mixed_quant_predicate()` with something like:

```python
index = (
    int(path.split(".")[layer_location])
    if len(path.split(".")) > layer_location
    else 0
)
# Build a mixed quant like "DQ3" of the arXiv paper https://arxiv.org/abs/2505.02390
# "Quantitative Analysis of Performance Drop in DeepSeek Model Quantization"
q_bits = 4
if "lm_head" in path:
    q_bits = 6
# if "tokens" in path:
#     q_bits = 4
if "attn.kv" in path:
    q_bits = 6
# if "o_proj" in path:
#     q_bits = 4
# if "attn.q" in path:
#     q_bits = 4
# For all "mlp" and "shared experts"
if "down_proj" in path:
    q_bits = 6
# if "up_proj" in path:
#     q_bits = 4
# if "gate_proj" in path:
#     q_bits = 4
# For "switch experts"
if "switch_mlp.up_proj" in path:
    q_bits = 3
if "switch_mlp.gate_proj" in path:
    q_bits = 3
if "switch_mlp.down_proj" in path:
    q_bits = 3
# Blocks 3 and 4 are higher quality
if (index == 3) or (index == 4):
    q_bits = 6
# Every 5th block is "medium" quality
if (index % 5) == 0:
    q_bits = 4
# print("path:", path, "index:", index, "q_bits:", q_bits)
return {"group_size": group_size, "bits": q_bits}
```
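
If you would rather not edit `convert.py` in place, the same assignment can be written as a standalone predicate and passed to mlx-lm from Python. This is only a sketch, under two assumptions: recent mlx-lm versions call the predicate with `(path, module, config)` and accept a returned `{"group_size": ..., "bits": ...}` dict, and layer paths look like `model.layers.<N>...` so the block index sits at position 2.

```python
import mlx.nn as nn

GROUP_SIZE = 64      # assumption: the default mlx-lm group size
LAYER_LOCATION = 2   # assumption: block index position in "model.layers.<N>...."

def dq3_predicate(path: str, module: nn.Module, config: dict) -> dict:
    """Same bit assignment as the convert.py snippet above, as a callable.
    Order matters: the per-block checks at the end override the per-tensor
    checks, exactly as in the snippet."""
    parts = path.split(".")
    index = int(parts[LAYER_LOCATION]) if len(parts) > LAYER_LOCATION else 0

    q_bits = 4                          # default
    if "lm_head" in path:
        q_bits = 6
    if "attn.kv" in path:
        q_bits = 6
    if "down_proj" in path:             # mlp and shared experts
        q_bits = 6
    if "switch_mlp.up_proj" in path:    # "switch" (routed) experts
        q_bits = 3
    if "switch_mlp.gate_proj" in path:
        q_bits = 3
    if "switch_mlp.down_proj" in path:
        q_bits = 3
    if index in (3, 4):                 # blocks 3 and 4 are higher quality
        q_bits = 6
    if index % 5 == 0:                  # every 5th block is "medium" quality
        q_bits = 4
    return {"group_size": GROUP_SIZE, "bits": q_bits}
```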

If you want to squeeze a bit more quality out of your quant, and you do not need a larger context window, you can change the last part of the `convert.py` snippet above to:

```python
if "switch_mlp.down_proj" in path:
    q_bits = 4
# Blocks 3 and 4 are higher quality
if (index == 3) or (index == 4):
    q_bits = 6
# print("path:", path, "index:", index, "q_bits:", q_bits)
return {"group_size": group_size, "bits": q_bits}
```

Then create your DQ3_K_M quant with:

```bash
mlx_lm.convert --hf-path moonshotai/Kimi-K2-Instruct-0905 --mlx-path your-model-DQ3_K_M -q --quant-predicate mixed_3_4 --trust-remote-code
```
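
A Python-API equivalent is sketched below, assuming your mlx-lm version exposes `convert()` with a callable `quant_predicate` (recent versions do); `my_quant_predicates` is a hypothetical module holding the `dq3_predicate` sketched earlier.

```python
from mlx_lm import convert  # in some versions: from mlx_lm.convert import convert

# Hypothetical module: save the dq3_predicate sketch from above as
# my_quant_predicates.py next to this script, or define it inline here.
from my_quant_predicates import dq3_predicate

convert(
    hf_path="moonshotai/Kimi-K2-Instruct-0905",
    mlx_path="your-model-DQ3_K_M",
    quantize=True,                  # -q on the CLI
    quant_predicate=dq3_predicate,  # callable instead of a named recipe
)
```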

---

Enjoy!