Can't run with TP 4
Can't run with the sglang PR docker container on 4 RTX 6000 Pro GPUs:
ValueError: The output_size of gate's and up's weight = 320 is not divisible by weight quantization block_n = 128.
Very disappointing for a model of this size in FP8.
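For what it's worth, the error is the FP8 block-quantization constraint: every weight shard has to be a multiple of the quantization block size (block_n = 128), and with TP4 the gate/up projection is split into per-GPU slices of 320, which is not a multiple of 128. A rough sketch of the check (the 1280 total is my inference from 320 x 4, not something the error states):

# Rough sketch of the shard-size check behind the error above.
# 1280 is inferred from 320 x 4 (an assumption); block_n = 128 comes from the error message.
def shard_is_block_aligned(total_output_size, tp_size, block_n=128):
    per_gpu = total_output_size // tp_size
    return per_gpu % block_n == 0

print(shard_is_block_aligned(1280, tp_size=4))  # False: 1280 / 4 = 320, and 320 % 128 = 64
print(shard_is_block_aligned(1280, tp_size=1))  # True: the unsplit 1280 is a multiple of 128

Running without tensor-parallel sharding of these weights (data parallel, or expert parallelism for the MoE layers) avoids the problem, because the unsplit dimension stays a multiple of 128.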
I already ran it with 4 x RTX 6000 Pro Blackwell.
You are wrong.
"Note: The FP8 version of Step-3.5-Flash cannot use TP4. You can try DP4 instead"
Ok.
vllm serve stepfun-ai/Step-3.5-Flash-FP8 \
  --served-model-name Step-3.5-Flash \
  --tensor-parallel-size 4 \
  --enable-expert-parallel \
  --disable-cascade-attn \
  --reasoning-parser step3p5 \
  --enable-auto-tool-choice \
  --tool-call-parser step3p5 \
  --hf-overrides '{"num_nextn_predict_layers": 1}' \
  --speculative_config '{"method": "step3p5_mtp", "num_speculative_tokens": 1}' \
  --trust-remote-code
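If the TP4 quantization error still shows up, the DP4 route from the note above would look roughly like this. This is only a sketch: --data-parallel-size is vLLM's data-parallel flag, and the remaining flags are carried over from the command above without further testing on this model.

vllm serve stepfun-ai/Step-3.5-Flash-FP8 \
  --served-model-name Step-3.5-Flash \
  --data-parallel-size 4 \
  --enable-expert-parallel \
  --reasoning-parser step3p5 \
  --enable-auto-tool-choice \
  --tool-call-parser step3p5 \
  --trust-remote-code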
I just reinstalled again... because there's a bug in the tool call parser that I'm trying to fix.
I used this to install vllm:
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
Python 3.10
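(A quick way to confirm the nightly wheel actually got picked up, if you run into the same caching issue: python -c "import vllm; print(vllm.__version__)")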
Ok, I changed my imagePullPolicy to Always and forced a fresh nightly pull. It works now.
Were you successful in fixing it? Can you share the change?
No, because Qwen-Coder-Next came out and its performance was so good that I didn't care about Step-3.5-Flash anymore. 😅
Enabling the expert parallel flag resolved this error for me.



