Preference datasets
- trl-lib/hh-rlhf-helpful-base (updated Jan 8, 2025)
- trl-lib/lm-human-preferences-descriptiveness (updated Jan 8, 2025)
- trl-lib/lm-human-preferences-sentiment (updated Jan 8, 2025)
- trl-lib/rlaif-v (updated Jan 8, 2025)
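As a rough sketch (field names follow TRL's documented preference format; the example text itself is invented), each row in a preference dataset pairs one prompt with a preferred and a dispreferred completion:

```python
# Hypothetical row in the standard preference format: a prompt plus a
# "chosen" (preferred) and a "rejected" (dispreferred) completion.
row = {
    "prompt": "The sky is",
    "chosen": " blue.",
    "rejected": " green.",
}

# Both completions continue the same prompt; a preference trainer (e.g. DPO)
# learns to rank the chosen continuation above the rejected one.
chosen_text = row["prompt"] + row["chosen"]
rejected_text = row["prompt"] + row["rejected"]
```

Some datasets instead store the prompt implicitly, repeating it inside both `chosen` and `rejected`; the paired structure is what makes the format "preference" data.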
Prompt-completion datasets
- trl-lib/tldr (updated Jan 8, 2025)
- trl-lib/OpenMathReasoning (updated Apr 26, 2025)
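A minimal sketch of the prompt-completion format (field names per TRL's documentation; the text is invented): the model is trained to produce `completion` given `prompt`, as in TL;DR-style summarization:

```python
# Hypothetical row in the prompt-completion format: supervised fine-tuning
# maximizes the likelihood of "completion" conditioned on "prompt".
row = {
    "prompt": "Post: My dog learned to open the fridge.\nTL;DR:",
    "completion": " Dog figured out the fridge door.",
}

# For training, prompt and completion are typically concatenated, with the
# loss applied only to the completion tokens.
full_text = row["prompt"] + row["completion"]
```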
Unpaired preference datasets
- trl-lib/ultrafeedback-gpt-3.5-turbo-helpfulness (updated Jan 8, 2025)
- trl-lib/kto-mix-14k (updated Mar 25, 2024)
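In the unpaired preference format (field names per TRL's documentation; the rows here are invented), each completion is rated on its own with a boolean `label` rather than against a paired alternative, which is what trainers like KTO consume:

```python
# Hypothetical rows in the unpaired preference format: no chosen/rejected
# pair, just independent good/bad judgments per completion.
rows = [
    {"prompt": "2 + 2 =", "completion": " 4", "label": True},
    {"prompt": "2 + 2 =", "completion": " 5", "label": False},
]

# Desirable and undesirable examples need not be balanced or share prompts.
desirable = [r for r in rows if r["label"]]
undesirable = [r for r in rows if not r["label"]]
```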
Online-DPO
- trl-lib/pythia-1b-deduped-tldr-online-dpo (1B, updated Aug 2, 2024)
- trl-lib/pythia-1b-deduped-tldr-sft (1B, updated Aug 2, 2024)
- trl-lib/pythia-6.9b-deduped-tldr-online-dpo (7B, updated Aug 2, 2024)
- trl-lib/pythia-2.8b-deduped-tldr-sft (updated Aug 2, 2024)
Stepwise supervision datasets
- trl-lib/math_shepherd (updated Jan 8, 2025)
- trl-lib/prm800k (updated Jan 8, 2025)
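A sketch of the stepwise supervision format (field names per TRL's documentation; the problem and steps are invented): the completion is split into reasoning steps, each with its own correctness label, as in Math-Shepherd and PRM800K for training process reward models:

```python
# Hypothetical row in the stepwise supervision format: one label per
# reasoning step, so a reward model can score intermediate steps.
row = {
    "prompt": "Janet has 3 apples and buys 2 more. How many does she have?",
    "completions": [
        "Janet starts with 3 apples.",
        "She buys 2 more, so 3 + 2 = 5.",
        "The answer is 5.",
    ],
    "labels": [True, True, True],
}

# Every step must have exactly one label.
assert len(row["completions"]) == len(row["labels"])
```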
Prompt-only datasets
- trl-lib/ultrafeedback-prompt (updated Jan 8, 2025)
- trl-lib/DeepMath-103K (updated Nov 14, 2025)
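The prompt-only format is the simplest (field name per TRL's documentation; the prompt text is invented): only a prompt is stored, and completions are generated by the policy during training, as in online methods like Online DPO or GRPO:

```python
# Hypothetical row in the prompt-only format: no reference completion is
# stored; the trainer samples completions from the model at training time.
row = {"prompt": "Prove that the sum of two even numbers is even."}

# The schema is just a single "prompt" field.
assert list(row) == ["prompt"]
```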
Comparing DPO with IPO and KTO
A collection of chat models to explore the differences between three alignment techniques: DPO, IPO, and KTO.
- teknium/OpenHermes-2.5-Mistral-7B (text generation, updated Feb 19, 2024)
- Intel/orca_dpo_pairs (updated Nov 29, 2023)
- trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.1-steps-200 (updated Dec 20, 2023)
- trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.2-steps-200 (updated Dec 20, 2023)