balu
balukumar
AI & ML interests
None yet
Recent Activity
reacted to danielhanchen's post with 🔥 about 3 hours ago
Run GLM-4.7-Flash locally on your device with 24GB RAM! 🔥
It's the best performing 30B model on SWE-Bench and GPQA. With 200K context, it excels at coding, agents, chat & reasoning.
GGUF: https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF
Guide: https://unsloth.ai/docs/models/glm-4.7-flash
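The post links the GGUF build but does not show how to load it; below is a minimal sketch of running such a GGUF locally with llama-cpp-python. This is not part of the original post: the quantization filename, context size, and prompt are assumptions for illustration.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GLM-4.7-Flash
# GGUF has already been downloaded from the unsloth repo linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.7-Flash-Q4_K_M.gguf",  # hypothetical local filename/quant
    n_ctx=8192,        # working context window (the model itself supports up to 200K)
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Simple chat-style request against the loaded model
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```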
liked a model about 3 hours ago
fal/flux-2-klein-4B-background-remove-lora
liked a model about 3 hours ago
fal/flux-2-klein-4B-zoom-lora
Organizations
None yet