# hoax_training

- **Author:** Jonathan Harrison
- **Publisher:** Hugging Face
- **DOI:** 10.57967/hf/6275
- **URL:** https://huggingface.co/datasets/Raiff1982/hoax_training
## Overview
hoax_training is a curated dataset designed to train and evaluate conversational AI models like Codette on misinformation detection, source verification, and ethical guidance.
The dataset includes:
- Training set: mixed single-turn and multi-turn chat examples (JSONL format).
- Validation set: focused one-shot Q&A examples for evaluation consistency.
All examples are formatted in OpenAI-style messages arrays with roles (system, user, assistant).
## Dataset Structure
- `codette_chat_training.jsonl`: 30 training examples
  - Mix of Q&A and short dialogues
  - Focused on misinformation red flags, source credibility, and ethical guidance
- `codette_validation.jsonl`: 30 validation examples
  - One-shot Q&A pairs for benchmark evaluation
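Assuming the two files have been downloaded locally, they can be read with the standard library alone; a minimal sketch (the helper name `load_jsonl` is illustrative, not part of the dataset):

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Read a JSONL file: one JSON object per line, skipping blank lines."""
    examples = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.strip():
            examples.append(json.loads(line))
    return examples

# Paths assume the files sit in the working directory:
# train = load_jsonl("codette_chat_training.jsonl")
# val = load_jsonl("codette_validation.jsonl")
```

The same files can also be loaded through the `datasets` library's generic `json` loader if you prefer Hugging Face tooling.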
Each entry looks like:

```json
{
  "messages": [
    {"role": "system", "content": "You are Codette, an ethical AI guided by love and logic."},
    {"role": "user", "content": "This headline says 'recently declassified footage shows aliens'. Should I believe it?"},
    {"role": "assistant", "content": "Be cautious. Phrases like 'recently declassified' and 'footage' are common in hoaxes. Always verify with trusted sources."}
  ]
}
```
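Records in this shape can be checked programmatically before training; a minimal sketch (the function name `is_valid_example` and the exact checks are assumptions, not part of the dataset spec):

```python
import json

ALLOWED_ROLES = {"system", "user", "assistant"}

def is_valid_example(record):
    """Check that a record carries a non-empty OpenAI-style messages array."""
    messages = record.get("messages")
    if not isinstance(messages, list) or not messages:
        return False
    return all(
        isinstance(m, dict)
        and m.get("role") in ALLOWED_ROLES
        and isinstance(m.get("content"), str)
        for m in messages
    )

example = json.loads("""
{"messages": [
  {"role": "system", "content": "You are Codette, an ethical AI guided by love and logic."},
  {"role": "user", "content": "Should I believe this headline?"},
  {"role": "assistant", "content": "Be cautious and verify with trusted sources."}
]}
""")
print(is_valid_example(example))  # True
```

Running a check like this over every line of both JSONL files is a cheap way to catch malformed entries before fine-tuning.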
## Intended Use
This dataset is intended for:

- Training ethical AI assistants to detect misinformation
- Teaching models to emphasize source credibility and evidence-based reasoning
- Evaluating language models' resilience to misinformation

It is **not** intended for:

- Generating misinformation
- Training models without transparency safeguards
## Ethical Considerations
- **Bias:** Examples focus on misinformation red flags (e.g., "recently declassified", "experts say"). These heuristics should supplement, not replace, rigorous fact-checking.
- **Scope:** The dataset is illustrative; it does not cover all misinformation patterns.
- **Responsibility:** Developers using this dataset should disclose its limitations and avoid overstating model reliability.
## Citation
If you use this dataset, please cite:
```bibtex
@misc{jonathan_harrison_2025,
  author    = {Jonathan Harrison},
  title     = {hoax_training (Revision c778375)},
  year      = 2025,
  url       = {https://huggingface.co/datasets/Raiff1982/hoax_training},
  doi       = {10.57967/hf/6275},
  publisher = {Hugging Face}
}
```
## Related Work

- **Codette Project**: ethical AI framework
- **Nexus Signal Engine**: signal integrity and misinformation guardrails
## License

Released openly for research and educational use, under the same terms as other Hugging Face datasets.
## Acknowledgments

Created by Jonathan Harrison (Raiff1982) as part of ongoing research into ethical AI systems and misinformation resilience.