Tags: Text Generation · Transformers · Safetensors · PyTorch · qwen2 · qwen · llama-3 · DAT · robust · adversarial · conversational · text-generation-inference

DAT - Distributional Adversarial Training


DAT uses continuous adversarial training on diffusion-based adversarial examples to close the gap between the empirical and the population robust risk. We fine-tune Qwen/Qwen2.5-14B-Instruct.

Note: this model does NOT use adversarial training. It is an ablation/baseline fine-tuned only on the diffusion-generated data.

For further information, consult our paper (https://arxiv.org/abs/2602.15238) or our repository (https://github.com/ASSELab/DAT).
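
As a minimal usage sketch (our illustration, not an official example from the DAT repository; the prompt and generation settings below are placeholders), the model can be loaded like any other Qwen2.5-Instruct checkpoint with 🤗 Transformers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ASSELab/DAT-Qwen2.5-14B-Instruct"

# Load tokenizer and model; BF16 matches the published tensor type.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style prompting via the model's built-in chat template.
messages = [
    {"role": "user", "content": "Explain adversarial training in one paragraph."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```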

Citation

@misc{hu2026closingdistributiongapadversarial,
      title={Closing the Distribution Gap in Adversarial Training for LLMs}, 
      author={Chengzhi Hu and Jonas Dornbusch and David Lüdke and Stephan Günnemann and Leo Schwinn},
      year={2026},
      eprint={2602.15238},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.15238}, 
}
Model size: 15B params · Tensor type: BF16 (Safetensors)

Model tree for ASSELab/DAT-Qwen2.5-14B-Instruct

Base model: Qwen/Qwen2.5-14B → Qwen/Qwen2.5-14B-Instruct → this model
