Endless Repetition? Anyone else encountered this?
My model repeats the same text block forever, even with BF16. I am using Ollama with Open WebUI.
You should set the temperature to 1. Also, since you can run BF16, you might want to try SGLang first.
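With Ollama you can set this per request through its REST API; here is a minimal sketch (it assumes the default port 11434, and the glm-4.7-flash tag is a placeholder for whatever tag you actually pulled):

curl http://localhost:11434/api/generate -d '{
  "model": "glm-4.7-flash",
  "prompt": "Hello",
  "stream": false,
  "options": { "temperature": 1 }
}'

Open WebUI exposes the same knob per model under its advanced parameters, so you can also change it there without touching the API.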
Thank you for your reply.
However, neither SGLang nor vLLM worked for me, even though I followed the exact steps to install the dependencies. And Ollama's TTFT (time to first token) is very slow and easily eats up all resources.
I think something didn't work well, either in software support or in the dependencies.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:False
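# Serve GLM-4.7-Flash across four GPUs with BF16 weights and an FP8 KV cache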
CUDA_VISIBLE_DEVICES='0,1,2,3' vllm serve zai-org/GLM-4.7-Flash \
--served-model-name GLM-4.7-Flash \
--tensor-parallel-size 4 \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--dtype bfloat16 \
--seed 3407 \
--max-model-len 200000 \
--gpu-memory-utilization 0.95 \
--max-num-batched-tokens 16384 \
--port 8000 \
--kv-cache-dtype fp8
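
For reference, a request of this shape against the OpenAI-compatible endpoint that vllm serve exposes on the configured port is enough to hit the behavior described below (a minimal sketch; the prompt and max_tokens are arbitrary):

curl http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "GLM-4.7-Flash", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 128}'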
Description:
When serving zai-org/GLM-4.7-Flash with the vLLM V1 engine on an NVIDIA H200, enabling the FP8 KV cache results in a complete failure of inference logic. The model enters an infinite repetition loop (e.g., outputting !!!!!!!!!! or repeating the same word indefinitely).
This appears to be a numerical stability issue specific to the interaction between BF16 weights, the FP8 KV cache, and the FlashMLA implementation in the V1 engine.
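
If that is indeed the culprit, a quick sanity check is to relaunch with the default KV cache dtype and see whether the repetition disappears; a minimal sketch, reusing the command above with only the relevant flags:

CUDA_VISIBLE_DEVICES='0,1,2,3' vllm serve zai-org/GLM-4.7-Flash \
--served-model-name GLM-4.7-Flash \
--tensor-parallel-size 4 \
--dtype bfloat16 \
--kv-cache-dtype auto
# 'auto' is the default and keeps the KV cache in the model dtype (BF16 here)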