AaryanK/GLM-4.7-Flash-GGUF
Tags: Text Generation, GGUF, English, Chinese, text-generation-inference, glm, Mixture of Experts, flash, glm4_moe_lite, conversational
License: MIT
Branch: main · 267 GB · 1 contributor · History: 17 commits
Latest commit: "Uploads Complete" by AaryanK (2caccda, verified), 28 days ago
| File | Size | Last commit message | Updated |
|---|---|---|---|
| .gitattributes | 2.37 kB | Upload GLM-4.7-Flash.q8_0.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q2_k.gguf | 11 GB | Upload GLM-4.7-Flash.q2_k.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q3_k_l.gguf | 15.6 GB | Upload GLM-4.7-Flash.q3_k_l.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q3_k_m.gguf | 14.4 GB | Upload GLM-4.7-Flash.q3_k_m.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q3_k_s.gguf | 13 GB | Upload GLM-4.7-Flash.q3_k_s.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q4_0.gguf | 17 GB | Upload GLM-4.7-Flash.q4_0.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q4_1.gguf | 18.8 GB | Upload GLM-4.7-Flash.q4_1.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q4_k_m.gguf | 18.1 GB | Upload GLM-4.7-Flash.q4_k_m.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q4_k_s.gguf | 17.1 GB | Upload GLM-4.7-Flash.q4_k_s.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q5_0.gguf | 20.7 GB | Upload GLM-4.7-Flash.q5_0.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q5_1.gguf | 22.5 GB | Upload GLM-4.7-Flash.q5_1.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q5_k_m.gguf | 21.3 GB | Upload GLM-4.7-Flash.q5_k_m.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q5_k_s.gguf | 20.7 GB | Upload GLM-4.7-Flash.q5_k_s.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q6_k.gguf | 24.6 GB | Upload GLM-4.7-Flash.q6_k.gguf with huggingface_hub | 28 days ago |
| GLM-4.7-Flash.q8_0.gguf | 31.8 GB | Upload GLM-4.7-Flash.q8_0.gguf with huggingface_hub | 28 days ago |
| README.md | 2.22 kB | Uploads Complete | 28 days ago |
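
The files above were uploaded with huggingface_hub, and each quantization can be downloaded individually the same way. Below is a minimal sketch, assuming the repository ID shown on this page (AaryanK/GLM-4.7-Flash-GGUF) and the q4_k_m file from the table; any other filename from the table can be substituted.

```python
# Minimal sketch: fetch one GGUF quantization from this repository.
# Assumes `pip install huggingface_hub`; the repo ID and filename come from the table above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="AaryanK/GLM-4.7-Flash-GGUF",      # repository shown on this page
    filename="GLM-4.7-Flash.q4_k_m.gguf",      # swap in any filename from the table
)
print(model_path)  # local path, ready to load in a GGUF-compatible runtime such as llama.cpp
```

As a general rule for GGUF quantizations, the lower-bit variants (q2_k, q3_k_*) trade accuracy for smaller downloads and lower memory use, while q8_0 stays closest to the original weights at the largest size.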