Tags: Text Generation · Transformers · Safetensors · PyTorch · English · llama · llama-3 · DAT · robust · adversarial · conversational · text-generation-inference

DAT - Distributional Adversarial Training


DAT uses continuous adversarial training on diffusion-based adversarial examples to close the gap between the empirical and the population robust risk. This model is a fine-tune of meta-llama/Meta-Llama-3-8B-Instruct.

Note: this model does NOT use adversarial training. It is an ablation/baseline fine-tuned only on the diffusion-generated data.

For further information, see our paper (https://arxiv.org/abs/2602.15238) or our repository (https://github.com/ASSELab/DAT).
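
Usage

A minimal usage sketch, assuming the standard transformers chat-template workflow for Llama-3-Instruct models. The prompt and generation settings below are illustrative only, not the evaluation setup from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ASSELab/Diffusion-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain adversarial training in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding for illustration; adjust sampling as needed
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```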

Citation

@misc{hu2026closingdistributiongapadversarial,
      title={Closing the Distribution Gap in Adversarial Training for LLMs}, 
      author={Chengzhi Hu and Jonas Dornbusch and David Lüdke and Stephan Günnemann and Leo Schwinn},
      year={2026},
      eprint={2602.15238},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.15238}, 
}