ubergarm/MiniMax-M2.5-GGUF
Tags: Text Generation · GGUF · imatrix · conversational · minimax_m2 · ik_llama.cpp
Branch: main
Repository size: 839 GB · 1 contributor · 34 commits
Latest commit by ubergarm: "As requested, add UD-IQ3_XXS to perplexity chart" (6e9d9db, about 12 hours ago)
Name                                    Last commit message                                         Last updated
IQ2_KS/                                 Upload folder using huggingface_hub                         3 days ago
IQ4_NL/                                 Upload folder using huggingface_hub                         2 days ago
IQ4_XS/                                 Upload folder using huggingface_hub                         3 days ago
IQ5_K/                                  Upload folder using huggingface_hub                         3 days ago
images/                                 As requested, add UD-IQ3_XXS to perplexity chart            about 12 hours ago
logs/                                   update perplexity logs with exact command                   3 days ago
mainline-IQ4_NL/                        Upload folder using huggingface_hub                         2 days ago
smol-IQ3_KS/                            Upload folder using huggingface_hub                         3 days ago
smol-IQ4_KSS/                           Upload folder using huggingface_hub                         2 days ago
.gitattributes (1.65 kB)                initial commit                                              3 days ago
README.md (12 kB)                       add link to AesSedai/MiniMax-M2.5-GGUF                      about 14 hours ago
imatrix-MiniMax-M2.5-BF16.dat (492 MB)  Upload imatrix-MiniMax-M2.5-BF16.dat with huggingface_hub   3 days ago
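
Each quant recipe sits in its own folder, so a single recipe can be fetched without pulling the full 839 GB repository. A minimal sketch using huggingface_hub's snapshot_download (the IQ4_NL folder and the local_dir path are only examples chosen from the listing above, not a prescribed workflow):

    # Download only the IQ4_NL quant folder from this repository.
    # Assumes huggingface_hub is installed; adjust allow_patterns and
    # local_dir (a hypothetical path) for whichever recipe you want.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="ubergarm/MiniMax-M2.5-GGUF",
        allow_patterns=["IQ4_NL/*"],
        local_dir="MiniMax-M2.5-GGUF",
    )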