---
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---

# EvasionBench

<p align="center">
  <a href="https://iiiiqiiii.github.io/EvasionBench"><img src="https://img.shields.io/badge/Project-Page-blue?style=for-the-badge" alt="Project Page"></a>
  <a href="https://huggingface.co/FutureMa/Eva-4B-V2"><img src="https://img.shields.io/badge/🤗-Model-yellow?style=for-the-badge" alt="Model"></a>
  <a href="https://github.com/IIIIQIIII/EvasionBench"><img src="https://img.shields.io/badge/GitHub-Repo-black?style=for-the-badge&logo=github" alt="GitHub"></a>
  <a href="https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b_inference.ipynb"><img src="https://img.shields.io/badge/Colab-Quick_Start-F9AB00?style=for-the-badge&logo=googlecolab" alt="Open In Colab"></a>
  <a href="https://arxiv.org/abs/2602.xxxxx"><img src="https://img.shields.io/badge/arXiv-Paper-red?style=for-the-badge" alt="Paper"></a>
</p>

EvasionBench is a benchmark dataset for detecting **evasive answers** in earnings call Q&A sessions. The task is to classify how directly corporate management addresses questions from financial analysts.

## Dataset Description

### Dataset Summary

This dataset contains 16,726 question-answer pairs from earnings call transcripts, each labeled with one of three evasion levels. The labels were generated using the Eva-4B-V2 model, a fine-tuned classifier specifically trained for financial discourse evasion detection.
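
As a quick sanity check, the balance of the three evasion levels can be inspected right after loading. A minimal sketch (using the `load_dataset` call shown below), assuming the split is named `train` and the labels live in a `label` column with the values used elsewhere on this card:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("FutureMa/EvasionBench")

# Count how many of the 16,726 pairs fall into each evasion level
# (the "label" column name is an assumption, not confirmed by this card)
counts = Counter(dataset["train"]["label"])
for label, n in counts.most_common():
    print(f"{label}: {n} ({n / len(dataset['train']):.1%})")
```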

### Supported Tasks

Text classification: each question-answer pair is assigned one of three evasion labels (`direct`, `intermediate`, or `fully_evasive`).

### Languages

English

### Loading with `datasets`

```python
from datasets import load_dataset

dataset = load_dataset("FutureMa/EvasionBench")
```
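
If you need a held-out set for benchmarking, `datasets` can carve one out. A sketch only, since this card does not state whether an official evaluation split is published; the 90/10 ratio and seed below are arbitrary:

```python
# Illustrative 90/10 split of the "train" split (not an official split)
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
print(splits)  # DatasetDict with "train" and "test" keys
```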

### Loading from Parquet

```python
import pandas as pd

df = pd.read_parquet("evasionbench_17k_eva4b_labels_dedup.parquet")
```
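
The same distribution check works on the DataFrame; again assuming a `label` column:

```python
# Share of each evasion label across the rows (column name assumed)
print(df["label"].value_counts(normalize=True))
```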

### Quick Start: Inference with Eva-4B-V2

````python
from datasets import load_dataset
from transformers import pipeline

# Load dataset
dataset = load_dataset("FutureMa/EvasionBench")

# Load model using text-generation pipeline
pipe = pipeline("text-generation", model="FutureMa/Eva-4B-V2", device_map="auto")

# Get a sample
sample = dataset["train"][0]
question = sample["question"]
answer = sample["answer"]

# Prepare prompt
prompt = f"""You are a financial analyst. Your task is to Detect Evasive Answers in Financial Q&A

Question: {question}
Answer: {answer}

Response format:
```json
{{"label": "direct|intermediate|fully_evasive"}}
```

Answer in ```json content, no other text"""

# Run inference
result = pipe(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
````
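
Because the model is instructed to answer inside a fenced `json` block, the predicted label can be pulled out with a little post-processing. A sketch that continues from the snippet above (the parsing approach is ours, not an official utility), assuming `generated_text` echoes the prompt and appends the model's JSON block last:

````python
import json
import re

def extract_label(generated_text: str):
    # Take the last ```json ... ``` block; the prompt itself contains one,
    # so the model's answer is expected to be the final block
    blocks = re.findall(r"```json\s*(\{.*?\})\s*```", generated_text, re.DOTALL)
    return json.loads(blocks[-1]).get("label") if blocks else None

print(extract_label(result[0]["generated_text"]))  # e.g. "direct"
````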
For a complete inference example with batch processing and evaluation, see our [Colab notebook](https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b_inference.ipynb).
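
As a rough illustration of the batch processing the notebook covers, the `text-generation` pipeline also accepts a list of prompts. A sketch continuing from the quick-start snippet (reusing its `pipe` and `dataset`; `build_prompt` is a hypothetical helper wrapping the same template):

````python
def build_prompt(question: str, answer: str) -> str:
    # Same template as the quick-start prompt above
    return f"""You are a financial analyst. Your task is to Detect Evasive Answers in Financial Q&A

Question: {question}
Answer: {answer}

Response format:
```json
{{"label": "direct|intermediate|fully_evasive"}}
```

Answer in ```json content, no other text"""

# Score a small batch; batch_size controls how many prompts run through
# the model at once
examples = dataset["train"].select(range(8))
prompts = [build_prompt(ex["question"], ex["answer"]) for ex in examples]
results = pipe(prompts, max_new_tokens=64, do_sample=False, batch_size=8)
for r in results:
    print(r[0]["generated_text"])
````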

## Dataset Creation

### Source Data

The question-answer pairs are derived from publicly available earnings call transcripts.

### Annotation Process

Labels were generated using **Eva-4B-V2**, a 4B parameter model fine-tuned specifically for evasion detection in financial discourse. Eva-4B-V2 achieves 84.9% Macro-F1 on the EvasionBench evaluation set, outperforming frontier LLMs including Claude Opus 4.5 and Gemini 3 Flash.
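
For reference, the Macro-F1 reported above can be reproduced with scikit-learn once predictions are collected; a sketch with hypothetical placeholder lists rather than real benchmark outputs:

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and model predictions
y_true = ["direct", "intermediate", "fully_evasive", "direct"]
y_pred = ["direct", "intermediate", "intermediate", "direct"]

# Macro-F1 averages per-class F1, weighting all three classes equally
print(f1_score(y_true, y_pred, average="macro"))
```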

## Considerations for Using the Data

This dataset can be used to train and evaluate models for detecting evasive answers in earnings call Q&A.

### Limitations

- Labels are model-generated (Eva-4B-V2) rather than human-annotated
- The dataset reflects the distribution of evasion patterns in the source data
- Performance may vary across different industries or time periods