Dream 7B: Diffusion Large Language Models
Paper: arXiv:2508.15487
A fine-tuned Dream 7B masked diffusion language model trained with the ReverseThought objective.
Given a question and its answer, the model learns to produce the step-by-step reasoning chain that bridges the question to the answer. This trains the model to generate coherent chain-of-thought reasoning via Dream's masked diffusion process.
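A minimal sketch of how a ReverseThought training example might be assembled. The function name, field names, and prompt template below are illustrative assumptions, not the model's actual data format:

```python
# Illustrative sketch only: the template and field names are assumptions,
# not the actual ReverseThought training format.

def build_reverse_thought_example(question: str, answer: str, reasoning: str) -> dict:
    """Pair a (question, answer) prompt with the reasoning chain as the target.

    The model is conditioned on the question and the final answer, and is
    trained to generate the intermediate step-by-step reasoning.
    """
    prompt = f"Question: {question}\nAnswer: {answer}\nReasoning:"
    return {"prompt": prompt, "target": reasoning}

example = build_reverse_thought_example(
    question="What is 12 * 7?",
    answer="84",
    reasoning="12 * 7 = (10 * 7) + (2 * 7) = 70 + 14 = 84.",
)
```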
```python
from transformers import AutoModel, AutoTokenizer

# trust_remote_code is required: the Dream architecture ships custom modeling code.
model = AutoModel.from_pretrained("WilhelmH/Bridge-7b-Diffusion", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("WilhelmH/Bridge-7b-Diffusion", trust_remote_code=True)
```
This is a masked diffusion language model (not autoregressive). It uses bidirectional attention and generates text by iteratively denoising masked tokens. See the Dream paper for details.
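The iterative denoising idea can be sketched with a toy unmasking loop. This is not the Dream API: real masked diffusion LMs score all positions with a bidirectional transformer and unmask the most confident ones each step, whereas this sketch uses a stand-in predictor and random unmasking order purely to show the control flow:

```python
import random

MASK = "<mask>"

def toy_denoise(tokens, predict, steps):
    """Iteratively fill masked positions, a few per step, until none remain.

    `predict` is a stand-in for the model: given the full (partially masked)
    sequence and a position, it returns a token for that position. Real masked
    diffusion LMs pick positions by model confidence; here we unmask in a
    fixed random order to keep the sketch self-contained.
    """
    tokens = list(tokens)
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    per_step = max(1, len(masked) // steps)
    rng = random.Random(0)
    while masked:
        rng.shuffle(masked)
        # Commit predictions for a subset of masked positions this step;
        # later steps see (and can condition on) the tokens filled so far.
        for i in masked[:per_step]:
            tokens[i] = predict(tokens, i)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
    return tokens

# Trivial stand-in predictor: emit the position index as a string.
out = toy_denoise([MASK] * 8, predict=lambda toks, i: str(i), steps=4)
assert MASK not in out
```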
Base model: Dream-org/Dream-v0-Instruct-7B