Upload README.md with huggingface_hub

README.md (CHANGED)
@@ -22,7 +22,7 @@ pipeline_tag: text-generation

<img src="https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg" width="20%"/>
</div>

# ELBAZ GLM-4.7 PRISM
(UNCENSORED)

**GLM-4.7: Your New Coding Partner - Now Unrestricted**

@@ -65,8 +65,16 @@ This project exists as **research and development experimentation** into underst

```
zai-org/GLM-4.7 (Base Model - BF16)
└── Ex0bit/GLM-4.7-PRISM (This Model)
    └── Ex0bit/GLM-4.7-PRISM-IQ1_S-GGUF (IQ1_S Quantization ~67GB)
```

## Available Quantizations

| Quant | Size | RAM Required | Link |
|-------|------|--------------|------|
| BF16 | ~717GB | 750GB+ | This repo |
| IQ1_S | ~67GB | 128GB | [GLM-4.7-PRISM-IQ1_S-GGUF](https://huggingface.co/Ex0bit/GLM-4.7-PRISM-IQ1_S-GGUF) |
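
As an alternative to the `huggingface-cli` download shown in the Ollama Quick Start below, a minimal `huggingface_hub` sketch (the filename is the one published in the GGUF repo):

```python
from huggingface_hub import hf_hub_download

# Download the IQ1_S quantization (~67GB) into the current directory
path = hf_hub_download(
    repo_id="Ex0bit/GLM-4.7-PRISM-IQ1_S-GGUF",
    filename="GLM-4.7-PRISM-IQ1_S-patched.gguf",
    local_dir=".",
)
print(path)
```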

## Prompt Format

This model uses the GLM chat format with thinking/reasoning support:

| Token | ID | Purpose |
|-------|-----|---------|
| `<\|system\|>` | 151335 | System prompt marker |
| `<\|user\|>` | 151336 | User message marker |
| `<\|assistant\|>` | 151337 | Assistant response marker |
| `<\|observation\|>` | 151338 | Tool response marker |
| `<think>` | 151350 | Reasoning block start |
| `</think>` | 151351 | Reasoning block end |
| `<\|endoftext\|>` | 151329 | End of sequence |
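
To verify these IDs against the shipped tokenizer, a quick sketch (assumes standard `transformers` can load this repo's tokenizer):

```python
from transformers import AutoTokenizer

# trust_remote_code is an assumption; GLM repos often ship custom tokenizer code
tok = AutoTokenizer.from_pretrained("Ex0bit/GLM-4.7-PRISM", trust_remote_code=True)

# Print the vocabulary ID for each special token in the table above
for t in ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>",
          "<think>", "</think>", "<|endoftext|>"]:
    print(t, tok.convert_tokens_to_ids(t))
```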

@@ -173,34 +181,150 @@ python3 -m sglang.launch_server \

```bash
python3 -m sglang.launch_server \
    ...
    --port 8000
```

---

## Using with Ollama (GGUF)

### Quick Start

```bash
# Download the IQ1_S GGUF file (~67GB)
huggingface-cli download Ex0bit/GLM-4.7-PRISM-IQ1_S-GGUF \
    GLM-4.7-PRISM-IQ1_S-patched.gguf \
    --local-dir .

# Create Modelfile
cat > Modelfile << 'EOF'
FROM ./GLM-4.7-PRISM-IQ1_S-patched.gguf

# Generation parameters
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER num_ctx 4096
PARAMETER num_gpu 99

# Stop tokens - CRITICAL to prevent infinite generation
PARAMETER stop "<|endoftext|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|observation|>"

# Chat template matching GLM-4.7 format
# Note: starts generation with <think> to enable thinking mode
TEMPLATE """[gMASK]<sop>{{- if .System }}<|system|>
{{ .System }}{{- end }}<|user|>
{{ .Prompt }}<|assistant|>
<think>"""

SYSTEM """You are a helpful AI assistant. Do not get stuck in thinking loops. Be concise and succinct."""
EOF

# Create and run
ollama create glm47-prism -f Modelfile
ollama run glm47-prism
```

### Required Stop Tokens

The following stop tokens are **required** in Ollama to prevent infinite generation loops:

| Token | ID | Purpose |
|-------|-----|---------|
| `<\|endoftext\|>` | 151329 | End of generation |
| `<\|user\|>` | 151336 | Prevents generating user turns |
| `<\|observation\|>` | 151338 | Tool/observation boundary |
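
The same stop tokens can also be passed per request through Ollama's REST API; a minimal sketch, assuming Ollama is serving on its default port 11434 and the model was created as `glm47-prism` per the Quick Start:

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "glm47-prism",
        "prompt": "What is 2+2?",
        "stream": False,
        "options": {
            "temperature": 0.7,
            "top_p": 0.9,
            # The same three stop tokens the Modelfile declares
            "stop": ["<|endoftext|>", "<|user|>", "<|observation|>"],
        },
    },
    timeout=600,
)
print(resp.json()["response"])
```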

### Thinking Mode

The template includes `<think>` at the end to trigger the model's built-in thinking/reasoning mode. The model will output its reasoning between `<think>` and `</think>` tags before providing the final answer.
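
If you consume completions programmatically, you may want to separate the reasoning from the answer. A small sketch (the helper name is ours; note that because the template itself emits the opening `<think>`, raw output may contain only the closing tag):

```python
def split_thinking(raw: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a raw completion string."""
    if "</think>" in raw:
        reasoning, answer = raw.split("</think>", 1)
        return reasoning.removeprefix("<think>").strip(), answer.strip()
    return "", raw.strip()

reasoning, answer = split_thinking("Simple arithmetic: 2+2=4.</think>2+2 equals 4.")
print(answer)  # 2+2 equals 4.
```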

---

## Using with llama.cpp

> **Important:** You **must** use the `--jinja` flag for correct chat template handling!

### Basic Inference

```bash
./llama-cli \
    -m GLM-4.7-PRISM-IQ1_S-patched.gguf \
    --jinja \
    -c 16384 \
    -n 2048 \
    --temp 0.7 \
    --top-p 0.9 \
    -ngl 99 \
    -p "Hello, please introduce yourself."
```

### With System Prompt

```bash
./llama-cli \
    -m GLM-4.7-PRISM-IQ1_S-patched.gguf \
    --jinja \
    -c 16384 \
    -n 2048 \
    --temp 0.7 \
    --top-p 0.9 \
    -ngl 99 \
    --system-prompt "You are a helpful AI assistant. Do not get stuck in thinking loops. Be concise and succinct." \
    -p "Explain quantum entanglement in simple terms."
```

### Interactive Chat Mode

```bash
./llama-cli \
    -m GLM-4.7-PRISM-IQ1_S-patched.gguf \
    --jinja \
    -c 16384 \
    --temp 0.7 \
    --top-p 0.9 \
    -ngl 99 \
    --system-prompt "You are a helpful AI assistant. Do not get stuck in thinking loops. Be concise and succinct." \
    -cnv
```

### MoE CPU Offload (for limited VRAM)

If you have limited VRAM but sufficient RAM, offload MoE expert layers to CPU:

```bash
./llama-cli \
    -m GLM-4.7-PRISM-IQ1_S-patched.gguf \
    --jinja \
    -c 8192 \
    --temp 0.7 \
    --top-p 0.9 \
    -ngl 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --system-prompt "You are a helpful AI assistant. Do not get stuck in thinking loops. Be concise and succinct." \
    -cnv
```

### Key Flags

| Flag | Purpose |
|------|---------|
| `--jinja` | **Required** - uses the embedded Jinja2 chat template |
| `-ngl 99` | Offload all layers to GPU |
| `-c 16384` | Context size (max 131072; adjust based on RAM) |
| `--system-prompt` | Set the system prompt |
| `-cnv` | Interactive conversation mode |
| `-ot ".ffn_.*_exps.=CPU"` | Offload MoE experts to CPU |

### Using llama-server (OpenAI-compatible API)

```bash
./llama-server \
    -m GLM-4.7-PRISM-IQ1_S-patched.gguf \
    --alias "glm47-prism" \
    --threads -1 \
    -ngl 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --temp 0.7 \
    --top-p 0.9 \
    -c 16384 \
    --port 8001 \
    --jinja \
    --system-prompt "You are a helpful AI assistant. Do not get stuck in thinking loops. Be concise and succinct."
```

Then use with OpenAI's Python library:

@@ -212,12 +336,26 @@ openai_client = OpenAI(

```python
from openai import OpenAI

openai_client = OpenAI(
    base_url = "http://localhost:8001/v1",  # assumption: llama-server from above on port 8001
    api_key = "sk-no-key-required",
)
completion = openai_client.chat.completions.create(
    model = "glm47-prism",
    messages = [{"role": "user", "content": "What is 2+2?"}],
)
print(completion.choices[0].message.content)
```

---

## Hardware Requirements

| Configuration | Performance | Notes |
|--------------|-------------|-------|
| 128GB RAM (CPU only) | ~1-3 tok/s | All layers on CPU |
| 128GB RAM + 24GB VRAM | ~4-7 tok/s | MoE experts on CPU |
| 128GB RAM + 48GB+ VRAM | ~8-15 tok/s | Most layers on GPU |

**Memory Usage**: ~67GB for IQ1_S model + ~1GB per 1K context tokens
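
As a quick rule-of-thumb calculator for that estimate (our helper, using the numbers above):

```python
def estimated_ram_gb(context_tokens: int, model_gb: float = 67.0) -> float:
    # ~1GB per 1K context tokens on top of the IQ1_S weights
    return model_gb + context_tokens / 1000

print(f"{estimated_ram_gb(16384):.1f} GB")  # 83.4 GB at the 16K default context
```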

---

## Thinking Mode Configuration

GLM-4.7 supports **Interleaved Thinking**, **Preserved Thinking**, and **Turn-level Thinking**.

@@ -267,11 +405,6 @@ extra_body={"chat_template_kwargs": {"enable_thinking": False}}
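
Per-request toggling goes through `chat_template_kwargs`, as the snippet in the hunk header above shows; a sketch against the OpenAI-compatible endpoint, reusing `openai_client` from earlier (assumes the server honors this extra field, as vLLM and SGLang do):

```python
completion = openai_client.chat.completions.create(
    model = "glm47-prism",
    messages = [{"role": "user", "content": "What is 2+2?"}],
    # Disable the <think> reasoning block for this request
    extra_body = {"chat_template_kwargs": {"enable_thinking": False}},
)
```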

The model was abliterated using **PRISM** - a state-of-the-art abliteration methodology combining multiple principled techniques for effective refusal removal while preserving model capabilities.

## Ethical Considerations

This model has been modified to reduce safety guardrails. Users are responsible for:

@@ -299,13 +432,13 @@ MIT (same as base model [zai-org/GLM-4.7](https://huggingface.co/zai-org/GLM-4.7

### Original Model Citation

```bibtex
@misc{5team2025glm45agenticreasoningcoding,
title={GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models},
author={GLM Team and Aohan Zeng and Xin Lv and Qinkai Zheng and Zhenyu Hou and Bin Chen and Chengxing Xie and Cunxiang Wang and Da Yin and Hao Zeng and Jiajie Zhang and Kedong Wang and Lucen Zhong and Mingdao Liu and Rui Lu and Shulin Cao and Xiaohan Zhang and Xuancheng Huang and Yao Wei and Yean Cheng and Yifan An and Yilin Niu and Yuanhao Wen and Yushi Bai and Zhengxiao Du and Zihan Wang and Zilin Zhu and Bohan Zhang and Bosi Wen and Bowen Wu and Bowen Xu and Can Huang and Casey Zhao and Changpeng Cai and Chao Yu and Chen Li and Chendi Ge and Chenghua Huang and Chenhui Zhang and Chenxi Xu and Chenzheng Zhu and Chongjing Wei and Chuang Li and Congfeng Yin and Daoyan Lin and Dayong Yang and Dazhi Jiang and Ding Ai and Erle Zhu and Fei Wang and Gengzheng Pan and Guo Wang and Hailong Sun and Haitao Li and Haiyang Li and Haiyi Hu and Hanyu Zhang and Hao Peng and Hao Tai and Haoke Zhang and Haoran Wang and Haoyu Yang and He Liu and He Zhao and Hongwei Liu and Hongxi Yan and Huan Liu and Huilong Chen and Ji Li and Jiajing Zhao and Jiamin Ren and Jian Jiao and Jiani Zhao and Jianyang Yan and Jiaqi Wang and Jiayi Gui and Jiayue Zhao and Jie Liu and Jijie Li and Jing Li and Jing Lu and Jingsen Wang and Jingwei Yuan and Jingxuan Li and Jingzhao Du and Jinhua Du and Jinxin Liu and Junkai Zhi and Junli Gao and Ke Wang and Lekang Yang and Liang Xu and Lin Fan and Lindong Wu and Lintao Ding and Lu Wang and Man Zhang and Minghao Li and Minghuan Xu and Mingming Zhao and Mingshu Zhai and Pengfan Du and Qian Dong and Shangde Lei and Shangqing Tu and Shangtong Yang and Shaoyou Lu and Shijie Li and Shuang Li and Shuang-Li and Shuxun Yang and Sibo Yi and Tianshu Yu and Wei Tian and Weihan Wang and Wenbo Yu and Weng Lam Tam and Wenjie Liang and Wentao Liu and Xiao Wang and Xiaohan Jia and Xiaotao Gu and Xiaoying Ling and Xin Wang and Xing Fan and Xingru Pan and Xinyuan Zhang and Xinze Zhang and Xiuqing Fu and Xunkai Zhang and Yabo Xu and Yandong Wu and Yida Lu and Yidong Wang and Yilin Zhou and Yiming Pan and Ying Zhang and Yingli Wang and Yinpei Li and Yinpei Su and Yipeng Geng and Yitong Zhu and Yongkun Yang and Yuhang Li and Yuhao Wu and Yujiang Li and Yunan Liu and Yunqing Wang and Yuntao Li and Yuxuan Zhang and Zezhen Liu and Zhen Yang and Zhengda Zhou and Zhongpei Qiao and Zhuoer Feng and Zhuorui Liu and Zichen Zhang and Zihan Wang and Zijun Yao and Zikang Wang and Ziqiang Liu and Ziwei Chai and Zixuan Li and Zuodong Zhao and Wenguang Chen and Jidong Zhai and Bin Xu and Minlie Huang and Hongning Wang and Juanzi Li and Yuxiao Dong and Jie Tang},
year={2025},
eprint={2508.06471},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.06471},
}
```

* [ZhipuAI](https://www.zhipuai.cn/) for GLM-4.7
* [llama.cpp](https://github.com/ggerganov/llama.cpp) for quantization tools
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp) for improved quantization
* [Unsloth](https://unsloth.ai/) for GGUF guides and imatrix calibration data
* The z.ai GLM Team for the outstanding foundation model

## Related Models

* [zai-org/GLM-4.7](https://huggingface.co/zai-org/GLM-4.7) - Base model
* [zai-org/GLM-4.7-FP8](https://huggingface.co/zai-org/GLM-4.7-FP8) - FP8 quantized version
* [unsloth/GLM-4.7-GGUF](https://huggingface.co/unsloth/GLM-4.7-GGUF) - GGUF quantizations
* [Ex0bit/GLM-4.7-PRISM-IQ1_S-GGUF](https://huggingface.co/Ex0bit/GLM-4.7-PRISM-IQ1_S-GGUF) - IQ1_S quantization (~67GB)
* [Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM](https://huggingface.co/Ex0bit/Elbaz-GLM-4.6V-Flash-PRISM) - GLM-4.6V-Flash abliterated

---

**Created by: Ex0bit (Eric Elbaz)**