Qwen3-Coder-30B-A3B-DFlash

Paper | GitHub | Blog

DFlash is a novel speculative decoding method that utilizes a lightweight block diffusion model for drafting. It enables efficient, high-quality parallel drafting that pushes the limits of inference speed.

This model is the drafter component. It must be used in conjunction with the target model Qwen/Qwen3-Coder-30B-A3B-Instruct.

DFlash Architecture

📊 Training Data & Efficiency

Qwen3-Coder-30B-A3B-DFlash is trained on 289K samples.

Despite being trained on significantly less data, DFlash already outperforms EAGLE-3 in inference acceleration. In comparison, lmsys/SGLang-EAGLE3-Qwen3-Coder-30B-A3B-Instruct-SpecForge is trained on the open-perfect-blend dataset with 1.4M samples, nearly 5× more data than DFlash.

This result highlights the training efficiency and scalability of DFlash, and suggests that further scaling the training data can unlock even greater acceleration gains.

🚀 Quick Start

SGLang

DFlash is now supported in SGLang; vLLM integration is currently in progress.

Installation

uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/16818/head#subdirectory=python"
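
To verify that the installation picked up the PR branch, a quick sanity check (the exact version string depends on the state of the PR):

python -c "import sglang; print(sglang.__version__)"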

Inference

python -m sglang.launch_server \
    --model-path Qwen/Qwen3-Coder-30B-A3B-Instruct \
    --speculative-algorithm DFLASH \
    --speculative-draft-model-path z-lab/Qwen3-Coder-30B-A3B-DFlash \
    --tp-size 1 \
    --dtype bfloat16 \
    --attention-backend fa3 \
    --mem-fraction-static 0.75 \
    --trust-remote-code
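
Once the server is running, requests can be sent through SGLang's OpenAI-compatible API. The sketch below assumes the default port 30000 (override with --port) and passes the target model path as the model name:

from openai import OpenAI

# SGLang serves an OpenAI-compatible endpoint; 30000 is the default port.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-30B-A3B-Instruct",  # the path given to --model-path
    messages=[{"role": "user", "content": "Please provide a Python implementation of the Bubble Sort algorithm."}],
    temperature=0.0,
    max_tokens=512,
)
print(response.choices[0].message.content)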

Transformers

Installation

pip install transformers==4.57.3 torch==2.9.0 accelerate

Inference

from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

model = AutoModel.from_pretrained(
    "z-lab/Qwen3-Coder-30B-A3B-DFlash", 
    trust_remote_code=True, 
    dtype="auto", 
    device_map="cuda:0"
).eval()

target = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-Coder-30B-A3B-Instruct", 
    dtype="auto", 
    device_map="cuda:0"
).eval()

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Coder-30B-A3B-Instruct")
prompt = "Please provide a Python implementation of the Bubble Sort algorithm."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generate_ids = model.spec_generate(
    input_ids=model_inputs["input_ids"], 
    max_new_tokens=2048, 
    temperature=0.0, 
    target=target, 
    stop_token_ids=[tokenizer.eos_token_id]
)

print(tokenizer.decode(generate_ids[0], skip_special_tokens=True))
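
To get a rough sense of the speedup on your own hardware, you can time spec_generate against plain autoregressive decoding with the target model. This is only a sketch that reuses the objects defined above; output lengths may differ between the two runs, so the benchmark tables below (reported in tokens/s) remain the more reliable reference.

import time

# Speculative decoding with the DFlash drafter (same call as above).
start = time.perf_counter()
spec_ids = model.spec_generate(
    input_ids=model_inputs["input_ids"],
    max_new_tokens=512,
    temperature=0.0,
    target=target,
    stop_token_ids=[tokenizer.eos_token_id]
)
spec_time = time.perf_counter() - start

# Plain autoregressive decoding with the target model alone.
start = time.perf_counter()
ar_ids = target.generate(**model_inputs, max_new_tokens=512, do_sample=False)
ar_time = time.perf_counter() - start

print(f"spec_generate: {spec_time:.2f}s | target.generate: {ar_time:.2f}s")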

Evaluation

DFlash consistently achieves high acceptance lengths and speedups across different concurrency levels, and reaches a similar acceptance length for both the bfloat16 target model and its FP8 variant Qwen3-Coder-30B-A3B-Instruct-FP8. All experiments are conducted with SGLang on a single B200 GPU.

We use a block size of 16 during speculation. You can specify a different block size at inference time by passing the --speculative-num-draft-tokens argument when launching the server, as shown in the example below.
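
For example, the launch command below is identical to the one above except that it drafts 8 tokens per block (an illustrative value; the results reported here use 16):

python -m sglang.launch_server \
    --model-path Qwen/Qwen3-Coder-30B-A3B-Instruct \
    --speculative-algorithm DFLASH \
    --speculative-draft-model-path z-lab/Qwen3-Coder-30B-A3B-DFlash \
    --speculative-num-draft-tokens 8 \
    --tp-size 1 \
    --dtype bfloat16 \
    --attention-backend fa3 \
    --mem-fraction-static 0.75 \
    --trust-remote-code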

HumanEval

| Batch Size | Method | Output Throughput (tokens/s) | Acceptance Length | Speedup vs. AR |
|---|---|---|---|---|
| 1 | Autoregressive | 229 | 1.00 | 1.00× |
| 1 | DFlash | 802 | 8.09 | 3.5× |
| 4 | Autoregressive | 686 | 1.00 | 1.00× |
| 4 | DFlash | 2078 | 8.09 | 3.0× |
| 8 | Autoregressive | 1068 | 1.00 | 1.00× |
| 8 | DFlash | 3442 | 8.09 | 3.2× |
| 16 | Autoregressive | 1681 | 1.00 | 1.00× |
| 16 | DFlash | 5429 | 8.09 | 3.2× |
| 32 | Autoregressive | 2713 | 1.00 | 1.00× |
| 32 | DFlash | 8314 | 8.09 | 3.1× |
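
The speedup column is simply the ratio of DFlash to autoregressive throughput at the same batch size (e.g. 802 / 229 ≈ 3.5× at batch size 1). A minimal check against the HumanEval numbers above:

# Recompute "Speedup vs. AR" from the HumanEval throughput columns above.
humaneval = {
    1:  (229, 802),
    4:  (686, 2078),
    8:  (1068, 3442),
    16: (1681, 5429),
    32: (2713, 8314),
}
for batch_size, (ar_tps, dflash_tps) in humaneval.items():
    print(f"batch {batch_size:>2}: {dflash_tps / ar_tps:.1f}x speedup")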

MBPP

| Batch Size | Method | Output Throughput (tokens/s) | Acceptance Length | Speedup vs. AR |
|---|---|---|---|---|
| 1 | Autoregressive | 228 | 1.00 | 1.00× |
| 1 | DFlash | 720 | 7.23 | 3.2× |
| 4 | Autoregressive | 682 | 1.00 | 1.00× |
| 4 | DFlash | 2052 | 7.23 | 3.0× |
| 8 | Autoregressive | 1057 | 1.00 | 1.00× |
| 8 | DFlash | 3360 | 7.23 | 3.2× |
| 16 | Autoregressive | 1697 | 1.00 | 1.00× |
| 16 | DFlash | 5522 | 7.23 | 3.3× |
| 32 | Autoregressive | 2735 | 1.00 | 1.00× |
| 32 | DFlash | 8538 | 7.23 | 3.1× |

LiveCodeBench

| Batch Size | Method | Output Throughput (tokens/s) | Acceptance Length | Speedup vs. AR |
|---|---|---|---|---|
| 1 | Autoregressive | 220 | 1.00 | 1.00× |
| 1 | DFlash | 569 | 6.42 | 2.6× |
| 4 | Autoregressive | 681 | 1.00 | 1.00× |
| 4 | DFlash | 1621 | 6.42 | 2.4× |
| 8 | Autoregressive | 1112 | 1.00 | 1.00× |
| 8 | DFlash | 2554 | 6.42 | 2.3× |
| 16 | Autoregressive | 1733 | 1.00 | 1.00× |
| 16 | DFlash | 4160 | 6.42 | 2.4× |
| 32 | Autoregressive | 2823 | 1.00 | 1.00× |
| 32 | DFlash | 6401 | 6.42 | 2.3× |

Acknowledgement

We are grateful to Yotta Labs for their compute support in training this draft model.

Citation

If you find DFlash useful for your research or applications, please cite our project.

@misc{chen2026dflash,
  title         = {DFlash: Block Diffusion for Flash Speculative Decoding},
  author        = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
  year          = {2026},
  eprint        = {2602.06036},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2602.06036}
}