---
license: mit
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/iyzgR89q50pp1T8HeeP15.png
base_model:
- cerebras/GLM-4.6-REAP-218B-A32B
pipeline_tag: text-generation
tags:
- abliterated
- derestricted
- glm-4.6
- unlimited
- uncensored
library_name: transformers
---
<div align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/iyzgR89q50pp1T8HeeP15.png" width="5%"/>
</div>

# Arli AI

# GLM-4.6-REAP-218B-A32B-Derestricted

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/XhCz9N4liIwWEh-yH37gR.png" width="15%"/>
</div>

GLM-4.6-REAP-218B-A32B-Derestricted is a **Derestricted** version of [GLM-4.6-REAP-218B-A32B](https://huggingface.co/cerebras/GLM-4.6-REAP-218B-A32B), created by **[Arli AI](https://www.arliai.com)**.

Our goal with this release is to provide a version of the model that removes refusal behaviors while maintaining the high-performance reasoning of the original GLM-4.6-REAP-218B-A32B. This is unlike regular abliteration, which often inadvertently "lobotomizes" the model.

### Methodology: Norm-Preserving Biprojected Abliteration

To achieve this, **[Arli AI](https://www.arliai.com)** utilized **Norm-Preserving Biprojected Abliteration**, a refined technique pioneered by Jim Lai (grimjim). You can read the full technical breakdown [in this article](https://huggingface.co/blog/grimjim/norm-preserving-biprojected-abliteration).

**Why this matters:**

Standard abliteration works by simply subtracting a "refusal vector" from the model's weights. While this works to uncensor a model, it is mathematically unprincipled: it alters the **magnitude** (or "loudness") of the neurons, destroying the delicate feature norms the model learned during training. This damage is why many uncensored models suffer from degraded logic or hallucinations.
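
For intuition, here is a toy illustration of that norm damage. Everything in it is a stand-in (a random matrix and a made-up refusal direction, not actual model weights); it only shows that plain directional subtraction changes the per-row weight norms the model learned:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
W = torch.randn(16, 16)                             # stand-in for one weight matrix
refusal_dir = F.normalize(torch.randn(16), dim=0)   # stand-in unit "refusal" direction

# Classic abliteration: project the refusal component out of the weights.
W_naive = W - torch.outer(refusal_dir, refusal_dir @ W)

# The learned per-row norms are altered by the subtraction.
print(W.norm(dim=1)[:4])        # original "magnitudes"
print(W_naive.norm(dim=1)[:4])  # no longer match what the model learned
```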

**How Norm-Preserving Biprojected Abliteration fixes it:**

This model was modified using a three-step approach that removes refusals without breaking the model's brain (a rough code sketch follows the list):

1. **Biprojection (Targeting):** We refined the refusal direction to ensure it is mathematically orthogonal to "harmless" directions. This ensures that when we cut out the refusal behavior, we do not accidentally cut out healthy, harmless concepts.
2. **Decomposition:** Instead of a raw subtraction, we decomposed the model weights into **Magnitude** and **Direction**.
3. **Norm-Preservation:** We removed the refusal component solely from the *directional* aspect of the weights, then recombined them with their **original magnitudes**.
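
Below is a minimal, illustrative sketch of those three steps applied to a single weight matrix. It is not the pipeline used to produce this release: the function name, tensor shapes, and the per-row decomposition axis are assumptions made for the example, and `refusal_dir` / `harmless_dirs` are assumed to have been estimated elsewhere (e.g. from contrasting activations on refusal-triggering versus harmless prompts).

```python
import torch
import torch.nn.functional as F

def norm_preserving_biprojected_ablation(W, refusal_dir, harmless_dirs):
    """Sketch of the three steps above for one weight matrix W of shape (d_out, d_in).

    refusal_dir:   (d_out,)  candidate refusal direction, extracted beforehand
    harmless_dirs: (k, d_out) directions of harmless behaviour to protect
    """
    # 1. Biprojection: remove overlap with the harmless directions so the
    #    direction we ablate is orthogonal to them.
    r = refusal_dir.clone()
    for h in F.normalize(harmless_dirs, dim=-1):
        r = r - (r @ h) * h
    r = F.normalize(r, dim=0)

    # 2. Decomposition: split the weights into per-row magnitude and direction.
    magnitudes = W.norm(dim=1, keepdim=True)      # (d_out, 1), the learned norms
    directions = W / magnitudes.clamp_min(1e-8)   # unit-norm rows

    # 3. Norm-preservation: ablate the refusal component from the directions only,
    #    re-normalize, then recombine with the ORIGINAL magnitudes.
    ablated = directions - torch.outer(r, r @ directions)
    ablated = F.normalize(ablated, dim=1)
    return magnitudes * ablated
```

In practice the same operation would be repeated over the relevant projection matrices of every transformer block.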

**The Result:**

By preserving the weight norms, we maintain the "importance" structure of the neural network. Benchmarks suggest that this method avoids the "Safety Tax": it not only removes refusals effectively, but can potentially **improve reasoning capabilities** over the baseline, as the model is no longer wasting compute on suppressing its own outputs.

In fact, you may find surprising new knowledge and capabilities that the original model does not initially expose.

**Quantization:**

- Original: https://huggingface.co/ArliAI/GLM-4.6-REAP-218B-A32B-Derestricted
- FP8: https://huggingface.co/ArliAI/GLM-4.6-REAP-218B-A32B-Derestricted-FP8
- INT8: https://huggingface.co/ArliAI/GLM-4.6-REAP-218B-A32B-Derestricted-W8A8-INT8

---

## Original model card:

<p align="center">
<em>𓌳 <strong>REAP</strong> 𓌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression</em><br>
<img src="https://i.imgur.com/rmzG3gg.png" alt="REAP" width="75%">
</p>

# GLM-4.6-REAP-218B-A32B-FP8

## ✨ Highlights

Introducing **GLM-4.6-REAP-218B-A32B-FP8**, a **memory-efficient compressed variant** of GLM-4.6-FP8 that maintains near-identical performance while being **40% lighter**.

**Note: this is a BF16 version for more accurate downstream low-bit quantization. An [FP8 version](https://huggingface.co/cerebras/GLM-4.6-REAP-218B-A32B-FP8) is also available on HF.**

This model was created using **REAP (Router-weighted Expert Activation Pruning)**, a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts. Key features include:

- **Near-Lossless Performance**: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 355B model
- **40% Memory Reduction**: Compressed from 355B to 218B parameters, significantly lowering deployment costs and memory requirements
- **Preserved Capabilities**: Retains all core functionalities including code generation, agentic workflows, repository-scale understanding, and function calling
- **Drop-in Compatibility**: Works with vanilla vLLM; no source modifications or custom patches required
- **Optimized for Real-World Use**: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## 📋 Model Overview

**GLM-4.6-REAP-218B-A32B-FP8** has the following specifications:

- **Base Model**: GLM-4.6-FP8
- **Compression Method**: REAP (Router-weighted Expert Activation Pruning)
- **Compression Ratio**: 40% expert pruning
- **Type**: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- **Number of Parameters**: 218B total, 32B activated per token
- **Number of Layers**: 92
- **Number of Attention Heads (GQA)**: 96 for Q and 8 for KV
- **Number of Experts**: 96 (uniformly pruned from 160)
- **Number of Activated Experts**: 8 per token
- **Context Length**: 202,752 tokens
- **License**: MIT

---

## 📊 Evaluations

TBD for the BF16 model. [Evaluation results are available for the FP8 variant](https://huggingface.co/cerebras/GLM-4.6-REAP-218B-A32B-FP8#%F0%9F%93%8A-evaluations).

For more details on the evaluation setup, refer to the [REAP arXiv preprint](https://arxiv.org/abs/2510.13999).

---

## 🚀 Deployment

You can deploy the model directly using the **latest vLLM** (v0.11.0); no source modifications or custom patches are required.

```bash
vllm serve cerebras/GLM-4.6-REAP-218B-A32B-FP8 \
    --tensor-parallel-size 8 \
    --tool-call-parser glm45 \
    --enable-auto-tool-choice \
    --enable-expert-parallel
```

If you run into insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g. 64).
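
Once the server is running, it exposes an OpenAI-compatible API. A minimal client call could look like the following (the default port 8000 and the served model name are assumptions here; match them to your actual deployment):

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint, by default on port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="cerebras/GLM-4.6-REAP-218B-A32B-FP8",   # must match the name passed to `vllm serve`
    messages=[
        {"role": "user", "content": "Write a Python function that merges two sorted lists."},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response.choices[0].message.content)
```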

## 🧩 Model Creation

This checkpoint was created by applying the **REAP (Router-weighted Expert Activation Pruning)** method uniformly across all Mixture-of-Experts (MoE) blocks of **GLM-4.6-FP8**, with a **40% pruning rate**.

### How REAP Works

REAP selects experts to prune based on a novel **saliency criterion** that considers both:
- **Router gate values**: How frequently and strongly the router activates each expert
- **Expert activation norms**: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while preserving those that play critical roles in the model's computations.
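
As a rough, illustrative sketch of that criterion (not the released implementation; the tensor names, shapes, and exact averaging are assumptions), a per-expert saliency score could be accumulated over calibration tokens like this:

```python
import torch

def reap_saliency(gate_probs, expert_outputs):
    """Toy REAP-style saliency for one MoE layer.

    gate_probs:     (num_tokens, num_experts) router gate values per token
    expert_outputs: (num_tokens, num_experts, d_model) each expert's output,
                    assumed zero for experts the router did not select
    Returns a (num_experts,) saliency score.
    """
    # Weight each expert's output norm by how strongly the router gated it,
    # then average over the calibration tokens.
    contribution = gate_probs * expert_outputs.norm(dim=-1)
    return contribution.mean(dim=0)

# Example with random stand-in data: keep the 96 most salient of 160 experts (40% pruned).
scores = reap_saliency(torch.rand(1024, 160).softmax(dim=-1), torch.randn(1024, 160, 64))
keep = scores.argsort(descending=True)[:96]
```

In the paper's framing, the surviving experts and their router weights are kept as-is, which is how the router retains independent control over them.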

### Key Advantages

- **One-Shot Compression**: No fine-tuning is required after pruning; the model is immediately ready for deployment
- **Preserved Router Control**: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- **Generative Task Superiority**: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

### Calibration

The model was calibrated using a diverse mixture of domain-specific datasets, including:
- Code generation samples ([evol-codealpaca](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1))
- Function calling examples ([xlam-function-calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k))
- Agentic multi-turn trajectories ([SWE-smith-trajectories](https://huggingface.co/datasets/SWE-bench/SWE-smith-trajectories))

📚 For more details, refer to the following resources:

- [🧾 arXiv Preprint](https://arxiv.org/abs/2510.13999)
- [🧾 REAP Blog](https://www.cerebras.ai/blog/reap)
- [💻 REAP Codebase (GitHub)](https://github.com/CerebrasResearch/reap)

---

## ⚖️ License

This model is derived from **[`zai-org/GLM-4.6-FP8`](https://huggingface.co/zai-org/GLM-4.6-FP8)** and distributed under the **MIT license**.

---

## 🧾 Citation

If you use this checkpoint, please cite the REAP paper:

```bibtex
@article{lasby-reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025}
}
```