
Falcon-H1R-7B-FP8

This repository presents Falcon-H1R-7B-FP8, a post-training FP8-quantized version of Falcon-H1R-7B produced with NVIDIA Model Optimizer. It enables efficient inference while preserving the strong reasoning capabilities introduced in the paper Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling.

Falcon-H1R-7B was trained via cold-start supervised fine-tuning on long reasoning traces and further enhanced by scaling RL with GRPO. The model demonstrates outstanding performance across benchmark evaluations covering mathematics, programming, instruction following, and general logical reasoning.

Model Description

Details

For more details on FP8 post-quantization for this model, please refer to the Falcon-H1R-FP8 technical blogpost.

For more details about the training protocol of this model, please refer to the Falcon-H1R technical blogpost and Technical Report.

Usage

Currently, you can run this model with the Hugging Face transformers, vLLM, or SGLang libraries.

Inference

Make sure to install the latest version of transformers, vLLM, or SGLang. For transformers, install the packages below (the Mamba kernels are required for this hybrid architecture):

pip install transformers
pip install mamba-ssm[causal-conv1d]
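
A minimal sketch for transformers-based inference is shown below. It assumes the FP8 checkpoint loads through the standard AutoTokenizer / AutoModelForCausalLM classes with the Mamba kernels installed as above; the prompt and generation settings are illustrative and follow the sampling recommendations given later in this card.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1R-7B-FP8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "If the product of two numbers is 360 and their GCD is 6, what is their LCM?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Sampling values follow the recommendations in the Sampling Parameters section.
output = model.generate(input_ids, max_new_tokens=4096, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))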

For vLLM, make sure to install the latest vLLM (nightly wheels):

pip install vllm --extra-index-url https://wheels.vllm.ai/nightly
  • FP8 support (vLLM): FP8 enablement from vLLM PR #32728 has been merged.
  • Memory footprint: Model weight memory drops from 14.2 GB (BF16) to 7.9 GB (FP8).
  • Throughput: Inference throughput improves by roughly 1.2× to 1.5× relative to BF16, depending on batch size, prompt length, and generation length, with minimal accuracy impact for this post-training FP8-quantized model.
  • Tensor parallelism (quantized models): Tensor-parallel enablement for quantized models from vLLM PR #33257 has been merged.
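
Besides the serving workflow described below, the quantized checkpoint can also be used through vLLM's offline Python API. The following is a minimal sketch, assuming a recent vLLM build with the FP8/ModelOpt support listed above; max_model_len and max_tokens are illustrative values, and the sampling settings follow the recommendations in the next section.

from vllm import LLM, SamplingParams

# quantization="modelopt" selects the ModelOpt FP8 checkpoint format.
llm = LLM(model="tiiuae/Falcon-H1R-7B-FP8", quantization="modelopt", max_model_len=32768)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)

messages = [
    {"role": "user", "content": "If the product of two numbers is 360 and their GCD is 6, what is their LCM?"},
]
outputs = llm.chat(messages, params)  # applies the model's chat template
print(outputs[0].outputs[0].text)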

Sampling Parameters

We recommend a temperature of 0.6 and top-p of 0.95, with max new tokens up to 65536. For supported frameworks, you can adjust the repetition_penalty and presence_penalty parameters to reduce endless repetitions, as shown in the sketch below.
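
As a sketch of how these controls can be passed once a vLLM server for this model is running (see the next section): presence_penalty is a standard OpenAI parameter, while repetition_penalty is a vLLM-specific extension passed through extra_body. The penalty values below are illustrative, not tuned recommendations.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="tiiuae/Falcon-H1R-7B-FP8",
    messages=[{"role": "user", "content": "Summarize the proof that the square root of 2 is irrational."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=65536,
    presence_penalty=0.5,                     # standard OpenAI sampling field (illustrative value)
    extra_body={"repetition_penalty": 1.05},  # vLLM-specific extension (illustrative value)
)
print(completion.choices[0].message.content)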

vLLM

For vLLM, simply start a server by executing the command below:

vllm serve tiiuae/Falcon-H1R-7B-FP8 \
  --tensor-parallel-size 1 \
  --data-parallel-size 1 \
  --reasoning-parser deepseek_r1 \
  --quantization modelopt

Additional flags:
  • You can reduce --max-model-len to save memory; the default value of 262144 is larger than necessary for most scenarios.
  • For function calling, append --enable-auto-tool-choice and --tool-call-parser hermes to the vllm serve command (a tool-calling sketch follows the client example below).

vLLM client execution code:

from openai import OpenAI
import json

# Point the client at the local vLLM OpenAI-compatible server.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
)

completion = client.chat.completions.create(
    model="tiiuae/Falcon-H1R-7B-FP8",
    messages=[
        {"role": "user", "content": "If the product of two numbers is 360 and their GCD is 6, what is their LCM?"},
    ],
    temperature=0.6,
    top_p=0.95,
    max_tokens=65536
)

msg = completion.choices[0].message

# With --reasoning-parser deepseek_r1, the reasoning trace is returned in
# reasoning_content, separately from the final answer in content.
print(json.dumps({
    "reasoning": msg.reasoning_content,
    "answer": msg.content
}, indent=2))
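
If the server was started with the function-calling flags listed above (--enable-auto-tool-choice and --tool-call-parser hermes), tool calls can be requested through the same client. The sketch below uses a hypothetical get_weather tool purely for illustration.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

completion = client.chat.completions.create(
    model="tiiuae/Falcon-H1R-7B-FP8",
    messages=[{"role": "user", "content": "What is the weather like in Abu Dhabi right now?"}],
    tools=tools,
    temperature=0.6,
    top_p=0.95,
)
print(completion.choices[0].message.tool_calls)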

Evaluation

Falcon-H1R achieves state-of-the-art results on reasoning benchmarks. Please refer to the Falcon-H1R-7B BF16 model card for full benchmark details. The table below compares accuracy on selected benchmarks between the Falcon-H1R BF16 checkpoint and the Falcon-H1R FP8 post-quantized checkpoint.

| Benchmark | Falcon-H1R-7B BF16 | Falcon-H1R-7B FP8 |
|-----------|--------------------|-------------------|
| AIME25    | 83.1               | 82.3              |
| LCBv5-v6  | 68.6               | 67.6              |
| GPQA-D    | 61.3               | 61.2              |

Useful links

Acknowledgements

We sincerely thank the NVIDIA team — Sergio Perez, Shengliang Xu, Vadim Gimpelson, Mireille Fares, Liana Mikaelyan, Amit Kushwaha, and Adam Czekalowski — for their valuable collaboration and support in post-quantizing Falcon-H1R-7B to FP8.

Citation

If the Falcon-H1R family of reasoning models is helpful to your work, feel free to cite us.

@misc{falcon-h1r,
      title={Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling}, 
      author={Falcon LLM Team and Iheb Chaabane and Puneesh Khanna and Suhail Mohmad and Slim Frikha and Shi Hu and Abdalgader Abubaker and Reda Alami and Mikhail Lubinets and Mohamed El Amine Seddik and Hakim Hacid},
      year={2026},
      eprint={2601.02346},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2601.02346}, 
}