Inquiry About Unsloth Dynamic 1.8-bit Qwen3-Coder-Next-UD-TQ1_0.gguf for vLLM Deployment
#6 opened by BuiDoan
I recently came across the Unsloth Dynamic 1.8-bit Qwen3-Coder-Next-UD-TQ1_0.gguf quant, and its size fits comfortably within my graphics card's VRAM. However, I'm unsure about its real-world inference speed and output quality.
Could you share your experience with this specific quantized version?
Is it suitable for deployment with vLLM?
If you’ve already deployed it, I’d greatly appreciate any insights on stability, latency, and overall effectiveness in real-world use cases.
It works on vLLM, but vLLM's GGUF path is probably not as optimized as just using llama-server.
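If you want to try both routes, here is a minimal sketch of each. The file path and base-model repo are placeholders, and vLLM's GGUF support is experimental (it generally needs the original model's tokenizer passed explicitly), so treat this as a starting point rather than a verified config:

```python
# Sketch 1: vLLM's offline API with a local GGUF file (experimental support).
# The GGUF path and <base-model-repo> below are placeholders -- substitute your own.
from vllm import LLM, SamplingParams

llm = LLM(
    model="./Qwen3-Coder-Next-UD-TQ1_0.gguf",  # local path to the quant (placeholder)
    tokenizer="<base-model-repo>",             # vLLM's GGUF loader needs the original tokenizer
)
params = SamplingParams(temperature=0.2, max_tokens=256)
out = llm.generate(["Write a Python function that reverses a string."], params)
print(out[0].outputs[0].text)
```

```python
# Sketch 2: the llama.cpp route via llama-cpp-python
# (llama-server uses the same backend kernels).
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen3-Coder-Next-UD-TQ1_0.gguf",  # same placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # context window; tune to your VRAM budget
)
out = llm("Write a Python function that reverses a string.",
          max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```

Serving the same file with `llama-server -m ./Qwen3-Coder-Next-UD-TQ1_0.gguf` exercises the same llama.cpp kernels as the second sketch, which is why it tends to be the faster option for GGUF quants.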