
Can't run with TP4

#5
by darkstar3537 - opened

Can't run with the SGLang PR Docker container on 4x RTX 6000 Pro GPUs:

ValueError: The output_size of gate's and up's weight = 320 is not divisible by weight quantization block_n = 128.

Very disappointing FP8 support for a model of this size.
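For context, the failing check is simple divisibility arithmetic: with block-wise FP8 quantization, each tensor-parallel shard's output dimension must be a multiple of the quantization block size. A minimal sketch of that check in Python (the 320 and 128 come from the error above; the total size of 1280 is only inferred from 320 x 4 and may differ from the real config):

# Block-quantized FP8 weights are tiled in block_n-sized chunks along the
# output dimension, so each TP shard's slice must divide evenly by block_n.
def check_fp8_shard(total_output_size: int, tp_size: int, block_n: int) -> None:
    shard_size = total_output_size // tp_size  # per-GPU slice of the gate/up projection
    if shard_size % block_n != 0:
        raise ValueError(
            f"The output_size of gate's and up's weight = {shard_size} is not "
            f"divisible by weight quantization block_n = {block_n}."
        )

# TP4 on this checkpoint: 1280 / 4 = 320, and 320 % 128 = 64, so the check fails.
check_fp8_shard(total_output_size=1280, tp_size=4, block_n=128)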

I already ran it with 4 x RTX 6000 Pro Blackwell.

You are wrong.

"Note: The FP8 version of Step-3.5-Flash cannot use TP4. You can try DP4 instead"

Ok.

[Three screenshots of the model running on 4x RTX 6000 Pro]

vllm serve stepfun-ai/Step-3.5-Flash-FP8 \
  --served-model-name Step-3.5-Flash \
  --tensor-parallel-size 4 \
  --enable-expert-parallel \
  --disable-cascade-attn \
  --reasoning-parser step3p5 \
  --enable-auto-tool-choice \
  --tool-call-parser step3p5 \
  --hf-overrides '{"num_nextn_predict_layers": 1}' \
  --speculative-config '{"method": "step3p5_mtp", "num_speculative_tokens": 1}' \
  --trust-remote-code
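The same setup can also be sketched with vLLM's offline Python API. Treat this as an assumption-laden sketch: the kwargs mirror the CLI flags above and their exact names may vary across vLLM versions, and server-only options (--served-model-name, the reasoning and tool-call parsers) have no offline equivalent:

from vllm import LLM

# Offline-engine sketch mirroring the serve command above (an assumption:
# kwarg names follow the CLI flags but may differ by vLLM version).
llm = LLM(
    model="stepfun-ai/Step-3.5-Flash-FP8",
    tensor_parallel_size=4,
    enable_expert_parallel=True,
    hf_overrides={"num_nextn_predict_layers": 1},
    speculative_config={"method": "step3p5_mtp", "num_speculative_tokens": 1},
    trust_remote_code=True,
)
print(llm.generate("Hello")[0].outputs[0].text)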

Shrug, it doesn't work for me with the nightly build, and the docs say it's not supported.

I just reinstalled again... because there's a bug in the tool call parser that I'm trying to fix.

I used this to install vllm:

pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
Python 3.10

OK, I changed my imagePullPolicy to Always and forced a fresh nightly pull. It works now.
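For anyone hitting the same stale-image problem, the relevant Kubernetes field is imagePullPolicy on the container spec. A minimal fragment (the container name and image tag here are illustrative, not the actual deployment):

# Pod spec fragment; container name and image tag are hypothetical.
spec:
  containers:
    - name: vllm
      image: vllm/vllm-openai:nightly   # illustrative tag
      imagePullPolicy: Always           # re-pull the image on every pod start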

darkstar3537 changed discussion status to closed

I just reinstalled again... because there's a bug in the tool call parser that I'm trying to fix.

I used this to install vllm:

pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
Python 3.10

Were you successful in fixing it? Can you share the change?

No, because Qwen-Coder-Next came out and its performance was so good that I didn't care about Step-3.5-Flash anymore. 😅

Enabling the expert-parallel flag (--enable-expert-parallel) resolved this error for me; with expert parallelism, each expert's gate/up weights stay whole instead of being sharded across TP ranks, so the block-size divisibility check passes.
