---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- finance
- earnings-calls
- evasion-detection
- nlp
pretty_name: EvasionBench
size_categories:
- 10K<n<100K
---
# EvasionBench
<p align="center">
<a href="https://iiiiqiiii.github.io/EvasionBench"><img src="https://img.shields.io/badge/Project-Page-blue?style=for-the-badge" alt="Project Page"></a>
<a href="https://huggingface.co/FutureMa/Eva-4B-V2"><img src="https://img.shields.io/badge/🤗-Model-yellow?style=for-the-badge" alt="Model"></a>
<a href="https://github.com/IIIIQIIII/EvasionBench"><img src="https://img.shields.io/badge/GitHub-Repo-black?style=for-the-badge&logo=github" alt="GitHub"></a>
<a href="https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b_inference.ipynb"><img src="https://img.shields.io/badge/Colab-Quick_Start-F9AB00?style=for-the-badge&logo=googlecolab" alt="Open In Colab"></a>
<a href="https://arxiv.org/abs/2601.09142"><img src="https://img.shields.io/badge/arXiv-Paper-red?style=for-the-badge" alt="Paper"></a>
</p>
EvasionBench is a benchmark dataset for detecting **evasive answers** in earnings call Q&A sessions. The task is to classify how directly corporate management addresses questions from financial analysts.
## Dataset Description
- **Repository:** [https://github.com/IIIIQIIII/EvasionBench](https://github.com/IIIIQIIII/EvasionBench)
- **Paper:** [https://arxiv.org/abs/2601.09142](https://arxiv.org/abs/2601.09142)
- **Point of Contact:** [GitHub Issues](https://github.com/IIIIQIIII/EvasionBench/issues)
### Dataset Summary
This dataset contains 16,726 question-answer pairs from earnings call transcripts, each labeled with one of three evasion levels. The labels were generated using the Eva-4B-V2 model, a fine-tuned classifier specifically trained for financial discourse evasion detection.
### Supported Tasks
- **Text Classification**: Classify the directness of management responses to analyst questions.
### Languages
English
## Dataset Structure
### Data Fields
| Field | Type | Description |
|-------|------|-------------|
| `uid` | string | Unique identifier for each sample |
| `question` | string | Analyst's question during the earnings call |
| `answer` | string | Management's response to the question |
| `eva4b_label` | string | Evasion label: `direct`, `intermediate`, or `fully_evasive` |
### Label Definitions
| Label | Definition | Description |
|-------|------------|-------------|
| `direct` | The core question is directly and explicitly answered | Clear figures, "Yes/No" stance, or direct explanations |
| `intermediate` | The response provides related context but sidesteps the specific core | Hedging, providing a range instead of a point, or answering adjacent topics |
| `fully_evasive` | The question is ignored, explicitly refused, or the response is entirely off-topic | Explicit refusal, total redirection, or irrelevant responses |
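For downstream training, the string labels can be mapped to integer ids. A minimal sketch (the id order here is an assumption; the dataset itself ships labels only as plain strings):

```python
# Map EvasionBench string labels to integer ids for classifier training.
# The id order is an assumption -- the dataset stores labels as strings only.
LABEL2ID = {"direct": 0, "intermediate": 1, "fully_evasive": 2}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def encode_label(example):
    """Add an integer `label` field alongside the string `eva4b_label`."""
    example["label"] = LABEL2ID[example["eva4b_label"]]
    return example

# With the datasets library: dataset = dataset.map(encode_label)
```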
### Data Statistics
| Metric | Value |
|--------|-------|
| Total Samples | 16,726 |
| Direct | 8,749 (52.3%) |
| Intermediate | 7,359 (44.0%) |
| Fully Evasive | 618 (3.7%) |
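The percentages above follow directly from the per-class counts; a quick sanity check:

```python
# Recompute the class percentages from the raw counts in the table above.
counts = {"direct": 8749, "intermediate": 7359, "fully_evasive": 618}
total = sum(counts.values())  # 16,726 samples in total

for label, n in counts.items():
    print(f"{label}: {n} ({100 * n / total:.1f}%)")
```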
### Example
```python
{
"uid": "4addbff893b81f64131fdc712d7a6d9a",
"question": "What is the expected margin for Q4?",
"answer": "We expect it to be 32%.",
"eva4b_label": "direct"
}
```
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
dataset = load_dataset("FutureMa/EvasionBench")
```
### Loading from Parquet
```python
import pandas as pd
df = pd.read_parquet("evasionbench_17k_eva4b_labels_dedup.parquet")
```
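Once loaded, the class distribution can be inspected with `value_counts`. A sketch on a toy frame mirroring the dataset schema (the real parquet file has 16,726 rows; the sample rows below are illustrative, not taken from the dataset):

```python
import pandas as pd

# Toy frame with the same four columns as the dataset. With the real file,
# replace this with: df = pd.read_parquet("evasionbench_17k_eva4b_labels_dedup.parquet")
df = pd.DataFrame({
    "uid": ["a1", "b2", "c3"],
    "question": ["What is the Q4 margin?", "Any M&A plans?", "Revenue guidance?"],
    "answer": ["32%.", "We focus on organic growth.", "No comment."],
    "eva4b_label": ["direct", "intermediate", "fully_evasive"],
})

# Count samples per evasion label.
print(df["eva4b_label"].value_counts())
```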
### Quick Start: Inference with Eva-4B-V2
````python
from datasets import load_dataset
from transformers import pipeline
# Load dataset
dataset = load_dataset("FutureMa/EvasionBench")
# Load model using text-generation pipeline
pipe = pipeline("text-generation", model="FutureMa/Eva-4B-V2", device_map="auto")
# Get a sample
sample = dataset["train"][0]
question = sample["question"]
answer = sample["answer"]
# Prepare prompt
prompt = f"""You are a financial analyst. Your task is to Detect Evasive Answers in Financial Q&A
Question: {question}
Answer: {answer}
Response format:
```json
{{"label": "direct|intermediate|fully_evasive"}}
```
Answer in ```json content, no other text"""
# Run inference
result = pipe(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
````
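The model is prompted to answer inside a ```json fence, so the predicted label has to be parsed out of the generated text. A hedged sketch of one way to do this (the sample string below is illustrative, not a captured model transcript; note the text-generation pipeline echoes the prompt, which itself contains a JSON-like template, so the parser scans matches from the end):

```python
import json
import re

VALID_LABELS = {"direct", "intermediate", "fully_evasive"}

def extract_label(text):
    """Return the last valid label found in a ```json fenced model answer, or None."""
    # The pipeline output includes the prompt, which contains a JSON-like
    # template, so scan candidate objects from last to first.
    for blob in reversed(re.findall(r"\{.*?\}", text, re.DOTALL)):
        try:
            label = json.loads(blob).get("label")
        except json.JSONDecodeError:
            continue
        if label in VALID_LABELS:
            return label
    return None

# Illustrative output string (not a real model transcript):
sample = 'Response format: {"label": "direct|intermediate|fully_evasive"}\n```json\n{"label": "direct"}\n```'
print(extract_label(sample))  # direct
```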
For a complete inference example with batch processing and evaluation, see our [Colab notebook](https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b_inference.ipynb).
## Dataset Creation
### Source Data
The question-answer pairs are derived from publicly available earnings call transcripts.
### Annotation Process
Labels were generated using **Eva-4B-V2**, a 4B parameter model fine-tuned specifically for evasion detection in financial discourse. Eva-4B-V2 achieves 84.9% Macro-F1 on the EvasionBench evaluation set, outperforming frontier LLMs including Claude Opus 4.5 and Gemini 3 Flash.
## Considerations for Using the Data
### Social Impact
This dataset can be used to:
- Improve transparency in corporate communications
- Assist financial analysts in identifying evasive responses
- Support research in financial NLP and discourse analysis
### Limitations
- Labels are model-generated (Eva-4B-V2) rather than human-annotated
- The dataset reflects the distribution of evasion patterns in the source data
- Performance may vary across different industries or time periods
## Citation
If you use this dataset, please cite:
```bibtex
@misc{ma2026evasionbenchlargescalebenchmarkdetecting,
title={EvasionBench: A Large-Scale Benchmark for Detecting Managerial Evasion in Earnings Call Q&A},
author={Shijian Ma and Yan Lin and Yi Yang},
year={2026},
eprint={2601.09142},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2601.09142}
}
```
## License
This dataset is released under the Apache 2.0 license.