mradermacher/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-i1-GGUF · Image-Text-to-Text · 35B · Updated 1 day ago · 6.69k · 1
We made a guide on how to run open LLMs in Claude Code, Codex and OpenClaw.

Use Gemma 4 and Qwen3.6 GGUFs for local agentic coding on 24GB RAM. Run with self-healing tool calls, code execution, and web search via the Unsloth API endpoint and llama.cpp.

Guide: https://unsloth.ai/docs/basics/api
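The local setup described above can be sketched as a pair of shell steps: serve a GGUF with llama.cpp's `llama-server`, then point an OpenAI-compatible coding agent at the local endpoint. This is a minimal sketch, not the guide's exact commands — the model filename, port, and environment-variable names are illustrative placeholders; see the linked Unsloth guide for the flags each tool (Claude Code, Codex, OpenClaw) actually expects.

```shell
# Serve a local GGUF with an OpenAI-compatible API (llama.cpp's llama-server).
# The model path is a placeholder -- substitute the file you downloaded.
llama-server -m ./Qwen3.6-35B-A3B-Q4_K_M.gguf \
  --port 8080 \
  -ngl 99   # offload as many layers to the GPU as VRAM allows

# Point an OpenAI-compatible client at the local server.
# (Variable names vary by tool; these are the common OpenAI-client ones.)
export OPENAI_BASE_URL="http://localhost:8080/v1"
export OPENAI_API_KEY="sk-local"   # llama-server does not check the key by default
```

On a 24GB machine, a 4-bit quant (e.g. Q4_K_M) is the usual choice so the weights plus context fit in memory; lower `-ngl` if you run out of VRAM.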