---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- fact-checking-retrieval
paperswithcode_id: hover
pretty_name: HoVer
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: uid
    dtype: string
  - name: claim
    dtype: string
  - name: supporting_facts
    list:
    - name: key
      dtype: string
    - name: value
      dtype: int32
  - name: label
    dtype:
      class_label:
        names:
          '0': NOT_SUPPORTED
          '1': SUPPORTED
  - name: num_hops
    dtype: int32
  - name: hpqa_id
    dtype: string
  splits:
  - name: train
    num_bytes: 5532178
    num_examples: 18171
  - name: validation
    num_bytes: 1299252
    num_examples: 4000
  - name: test
    num_bytes: 927513
    num_examples: 4000
  download_size: 3428352
  dataset_size: 7758943
---
# Dataset Card for HoVer
> **Note:** This is a scriptless, Parquet-based version of the HoVer dataset for seamless integration with the Hugging Face `datasets` library. No `trust_remote_code` required!
## Quick Start
```python
from datasets import load_dataset

# Load the dataset (no trust_remote_code needed!)
dataset = load_dataset("hover-nlp/hover")

# Access splits
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]

# Example usage
print(train[0])
# {
#   'id': 0,
#   'uid': '330ca632-e83f-4011-b11b-0d0158145036',
#   'claim': 'Skagen Painter Peder Severin Krøyer favored naturalism...',
#   'supporting_facts': [{'key': 'Kristian Zahrtmann', 'value': 0}, ...],
#   'label': 1,  # 0: NOT_SUPPORTED, 1: SUPPORTED
#   'num_hops': 3,
#   'hpqa_id': '5ab7a86d5542995dae37e986'
# }
```
## Dataset Description

- **Homepage:** https://hover-nlp.github.io/
- **Repository:** https://github.com/hover-nlp/hover
- **Paper:** https://arxiv.org/abs/2011.03088
- **Leaderboard:** https://hover-nlp.github.io/
### Dataset Summary
HoVer (HOP VERification) is an open-domain, many-hop fact extraction and claim verification dataset built upon the Wikipedia corpus. The dataset contains claims that require reasoning over multiple documents (multi-hop) to verify whether they are supported or not supported by evidence.
The original 2-hop claims are adapted from question-answer pairs in HotpotQA. The dataset was collected by a team of NLP researchers at UNC Chapel Hill and Verisk Analytics.
This version provides the dataset in Parquet format for efficient loading and compatibility with modern data processing pipelines, eliminating the need for custom loading scripts.
### Supported Tasks and Leaderboards

- **Fact Verification**: Determine whether a claim is SUPPORTED or NOT_SUPPORTED based on evidence from Wikipedia articles
- **Multi-hop Reasoning**: Claims require reasoning across multiple documents (indicated by the `num_hops` field)
- **Evidence Retrieval**: Identify relevant supporting facts from source documents

The official leaderboard is available at https://hover-nlp.github.io/.
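The verification task above reduces to binary classification over claims, so the basic offline metric is claim-level accuracy. A minimal sketch in plain Python (the prediction and reference lists below are illustrative, not real model output):

```python
# Claim-level accuracy for the binary verification task.
# Label ids follow this card's mapping: 0 = NOT_SUPPORTED, 1 = SUPPORTED.
def verification_accuracy(predictions, references):
    """Fraction of claims whose predicted label matches the gold label."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

# Illustrative values only.
gold = [1, 0, 1, 1]
pred = [1, 0, 0, 1]
print(verification_accuracy(pred, gold))  # 0.75
```

Grouping the same computation by `num_hops` gives a quick view of how accuracy degrades as more documents are involved.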
### Languages

English (`en`)

## Dataset Structure

### Data Instances

A sample training set example:
```json
{
  "id": 14856,
  "uid": "a0cf45ea-b5cd-4c4e-9ffa-73b39ebd78ce",
  "claim": "The park at which Tivolis Koncertsal is located opened on 15 August 1843.",
  "supporting_facts": [
    {"key": "Tivolis Koncertsal", "value": 0},
    {"key": "Tivoli Gardens", "value": 1}
  ],
  "label": 1,
  "num_hops": 2,
  "hpqa_id": "5abca1a55542993a06baf937"
}
```
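Each `supporting_facts` entry is a pointer, not text: `key` names a Wikipedia article and `value` is a sentence index within it. A sketch of resolving those pointers, assuming a hypothetical `corpus` mapping of article titles to sentence lists (the sentence strings below are stand-ins, since HoVer ships only the pointers, not the article text):

```python
# Hypothetical {article title: [sentences]} corpus; the sentence text is
# illustrative only — HoVer stores (title, sentence index) pointers.
corpus = {
    "Tivolis Koncertsal": [
        "Tivolis Koncertsal is a concert hall in Tivoli Gardens.",
        "It is used for classical concerts.",
    ],
    "Tivoli Gardens": [
        "Tivoli Gardens is an amusement park in Copenhagen.",
        "The park opened on 15 August 1843.",
    ],
}

def resolve_facts(supporting_facts, corpus):
    """Look up each {'key', 'value'} pair as corpus[title][sentence_index]."""
    return [corpus[fact["key"]][fact["value"]] for fact in supporting_facts]

facts = [
    {"key": "Tivolis Koncertsal", "value": 0},
    {"key": "Tivoli Gardens", "value": 1},
]
for sentence in resolve_facts(facts, corpus):
    print(sentence)
```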
> **Note:** In the test set, only the `id`, `uid`, and `claim` fields contain meaningful data. `label` is set to `-1`, `num_hops` to `-1`, `hpqa_id` to `"None"`, and `supporting_facts` is an empty list, as these are withheld for evaluation purposes.
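Because of those placeholder values, local evaluation code should keep only labeled rows. A minimal sketch over plain dicts (the example rows are illustrative):

```python
# Test-split rows carry placeholders (label == -1); keep only labeled rows.
examples = [
    {"id": 0, "claim": "A labeled claim.", "label": 1},
    {"id": 1, "claim": "An unlabeled test claim.", "label": -1},
]
labeled = [ex for ex in examples if ex["label"] != -1]
print(len(labeled))  # 1
```

With the `datasets` library, the same check can be applied lazily via `.filter(lambda ex: ex["label"] != -1)`.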
### Data Fields

- `id` (`int32`): Sequential identifier for the example within its split
- `uid` (`string`): Unique identifier (UUID) for the claim
- `claim` (`string`): The claim statement to be verified
- `supporting_facts` (`list`): List of evidence facts, where each fact contains:
  - `key` (`string`): Title of the Wikipedia article
  - `value` (`int32`): Sentence index within that article
- `label` (`ClassLabel`): Verification label with values:
  - `0`: NOT_SUPPORTED (the claim is not supported by the evidence)
  - `1`: SUPPORTED (the claim is supported by the evidence)
  - `-1`: Unknown (used in the test set)
- `num_hops` (`int32`): Number of reasoning hops required (typically 2-4 for this dataset)
- `hpqa_id` (`string`): Original HotpotQA question ID from which the claim was derived
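The `label` ids can be decoded to their string names with a small mapping that mirrors the `ClassLabel` definition above:

```python
# Mapping mirrors the ClassLabel names in this card's dataset_info.
LABEL_NAMES = ["NOT_SUPPORTED", "SUPPORTED"]

def label_to_name(label_id):
    """Decode a label id; -1 (the test-set placeholder) has no name."""
    return None if label_id == -1 else LABEL_NAMES[label_id]

print(label_to_name(1))   # SUPPORTED
print(label_to_name(-1))  # None
```

Once the dataset is loaded with the `datasets` library, the built-in equivalent is `dataset["train"].features["label"].int2str(1)`.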
### Data Splits
| Split | Examples |
|---|---|
| Train | 18,171 |
| Validation | 4,000 |
| Test | 4,000 |
| Total | 26,171 |
The splits maintain the original distribution from the HoVer dataset.
## Dataset Creation

### Curation Rationale
HoVer was created to address the challenge of multi-hop fact verification, where claims require reasoning across multiple documents. The dataset was built to push the boundaries of claim verification systems beyond single-document fact-checking.
### Source Data
The dataset is built upon Wikipedia as the knowledge source. Claims are adapted from HotpotQA question-answer pairs and modified to create verification statements that require multi-hop reasoning.
### Annotations
The dataset was annotated by expert annotators who identified supporting facts across multiple Wikipedia articles and determined whether claims were supported or not supported by the evidence.
## Additional Information

### Licensing Information
This dataset is licensed under the MIT License.
### Citation Information
```bibtex
@inproceedings{jiang2020hover,
  title = {{HoVer}: A Dataset for Many-Hop Fact Extraction And Claim Verification},
  author = {Yichen Jiang and Shikha Bordia and Zheng Zhong and Charles Dognin and Maneesh Singh and Mohit Bansal},
  booktitle = {Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2020}
}
```
### Contributions
Thanks to @abhishekkrthakur for adding the original dataset and @vincentkoc for creating this Parquet version.