---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: source
    dtype: string
  - name: task
    dtype: string
  - name: type
    dtype: string
  - name: instruction
    dtype: string
  - name: question
    dtype: string
  - name: options
    dtype: string
  - name: answer
    dtype: string
  - name: context
    dtype: string
  - name: evidence
    dtype: string
  splits:
  - name: test
    num_bytes: 1966003072
    num_examples: 1934
  download_size: 660289122
  dataset_size: 1966003072
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

<div align="center">

<h1><b>LooGLE v2</b></h1>

**The official repository of "LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?"**

**NeurIPS DB Track 2025**

<div>
  <a href="https://huggingface.co/datasets/MulabPKU/LooGLE-v2">
    <img src="https://img.shields.io/badge/πŸ€—-Dataset-blue" alt="Dataset">
  </a>
  <a href="https://mulabpku.github.io/LooGLE-v2/">
    <img src="https://img.shields.io/badge/🌐-Website-green" alt="Website">
  </a>
  <a href="https://arxiv.org/abs/2510.22548">
    <img src="https://img.shields.io/badge/πŸ“„-Paper-red" alt="Paper">
  </a>
  <a href="https://opensource.org/licenses/MIT">
    <img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License">
  </a>
</div>

</div>

---

## πŸ“‹ Overview

LooGLE v2 is a comprehensive benchmark designed to evaluate large language models on their ability to understand and process long-context documents with complex dependencies. The benchmark covers diverse domains including **Finance**, **Law**, **Code**, and **Game**.

---

## πŸš€ Quick Start

### πŸ“¦ Installation

```bash
# Create environment with Python 3.10
conda create -n loogle-v2 python=3.10
conda activate loogle-v2

# Install dependencies
pip install -r requirements.txt

# Install Flash Attention
pip install flash-attn==2.6.3 --no-build-isolation

# Or download the prebuilt wheel and install it directly
pip install flash_attn-2.6.3-cp310-cp310-linux_x86_64.whl
```

---

## πŸ“Š Dataset

Download the LooGLE v2 dataset from Hugging Face:

```bash
git clone https://huggingface.co/datasets/MuLabPKU/LooGLE-v2 ./datasets/LooGLE-v2
# Or use the Hugging Face CLI to download:
hf download MuLabPKU/LooGLE-v2 --repo-type dataset --local-dir ./datasets/LooGLE-v2
```
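
Once downloaded, the test split can also be loaded programmatically with the `datasets` library; a minimal sketch (passing the repo id instead of the local path loads straight from the Hub):

```python
from datasets import load_dataset

# Load the test split from the local clone; pass "MuLabPKU/LooGLE-v2"
# instead of the path to fetch directly from the Hub.
ds = load_dataset("./datasets/LooGLE-v2", split="test")

sample = ds[0]
# Each example carries string fields: id, source, task, type,
# instruction, question, options, answer, context, evidence.
print(sample["source"], sample["task"])
print(sample["question"][:200])
```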

---

## πŸ› οΈ Usage

### βš™οΈ Configuration

**vLLM server (for `predict.py`):**
```bash
python -m vllm.entrypoints.openai.api_server \
    --model path/to/your/model \
    --port 8000 \
    --max-model-len 131072
```

**Model entry (`config/models.jsonl`, shared by both scripts):**
```json
{
  "name": "your-model-name",
  "model": "path/to/model",
  "max_len": 131072,
  "base_url": "http://localhost:8000/v1",
  "api_key": "your-api-key"
}
```

Transformers mode (`predict_transformers.py`) does not need a server; it still reuses `name`, `model`, and `max_len` from this config. Ensure `base_url` matches your vLLM port when using the server route.
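
To sanity-check that the server and the config entry line up before a full run, you can query the OpenAI-compatible endpoint directly. A minimal sketch using the `openai` client (the exact request `predict.py` builds may differ):

```python
from openai import OpenAI

# Values mirror the config/models.jsonl entry above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-api-key")

resp = client.chat.completions.create(
    model="path/to/model",  # must match the --model the vLLM server was launched with
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=8,
)
print(resp.choices[0].message.content)
```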

### πŸ”Ž Pre-compute RAG Contexts (optional)

If you plan to run `--use_rag`, first generate `context_rag` with the preprocessor:

```bash
python rag_preprocess.py \
    --input_path ./datasets/LooGLE-v2 \
    --split test \
    --output_path ./datasets/LooGLE-v2/test_rag.jsonl \
    --embedding_model THUDM/LongCite-glm4-9b \
    --devices 0,1
```

For multi-turn refinement (using a generator model to iteratively improve retrieval queries):

```bash
python rag_preprocess.py \
    --input_path ./datasets/LooGLE-v2 \
    --split test \
    --output_path ./datasets/LooGLE-v2/test_rag_multi.jsonl \
    --embedding_model THUDM/LongCite-glm4-9b \
    --generator_model meta-llama/Llama-3.1-8B \
    --multi_turn --devices 0,1
```
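
The preprocessor's internals aren't reproduced here, but single-turn retrieval conceptually reduces to: chunk the context, embed the chunks and the question, and keep the top-k chunks by cosine similarity. A toy sketch, with a small `sentence-transformers` model standing in for the actual embedding model:

```python
from sentence_transformers import SentenceTransformer, util

# Stand-in embedder for illustration only; the commands above use THUDM/LongCite-glm4-9b.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def retrieve_topk(context: str, question: str, k: int = 5, chunk_size: int = 1000) -> str:
    # Fixed-size character chunks; the real preprocessor may chunk differently.
    chunks = [context[i:i + chunk_size] for i in range(0, len(context), chunk_size)]
    chunk_emb = embedder.encode(chunks, convert_to_tensor=True)
    query_emb = embedder.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, chunk_emb)[0]
    top = scores.topk(min(k, len(chunks)))
    # Join the best-scoring chunks into a compact context_rag string.
    return "\n\n".join(chunks[i] for i in top.indices.tolist())
```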

### 🎯 Running Predictions

#### Option A: vLLM server (`predict.py`)

```bash
python predict.py \
    --model your-model-name \
    --data_dir ./datasets/LooGLE-v2 \
    --save_dir ./results \
    --max_new_tokens 512
```

#### Option B: Transformers local (`predict_transformers.py`)

```bash
python predict_transformers.py \
    --model your-model-name \
    --data_dir ./datasets/LooGLE-v2 \
    --save_dir ./results \
    --max_new_tokens 512
```

Optional prompting flags (both scripts):
- `--use_cot` for Chain-of-Thought
- `--use_rag --rag_topk <k> --rag_context <path>` to inject precomputed `context_rag` (default file: `./datasets/LooGLE-v2/test_rag.jsonl`); a quick inspection snippet follows this list
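
`--rag_context` expects a JSONL file pairing each example `id` with its `context_rag` string (see the parameter table below). To confirm a precomputed file looks right:

```python
import json

# Peek at the first precomputed record; only "id" and "context_rag" are
# assumed here -- any additional fields are implementation details.
with open("./datasets/LooGLE-v2/test_rag.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

print(record["id"])
print(record["context_rag"][:300])
```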

<details>
<summary><b>πŸ“ Core parameters (both options)</b></summary>

| Flag | Purpose |
|------|---------|
| `--model` | Must match the `name` in `config/models.jsonl` |
| `--data_dir` | Dataset path (JSONL or Hugging Face) |
| `--save_dir` | Output directory |
| `--with_context` | 1/0 to include the original context |
| `--n_proc` | Number of parallel processes |
| `--max_new_tokens` | Generation length |
| `--use_cot` | Enable Chain-of-Thought |
| `--use_rag` | Use retrieved context |
| `--rag_topk` | How many retrieved chunks to keep |
| `--rag_context` | Path to the `id` + `context_rag` JSONL |

</details>

<details>
<summary><b>πŸ–₯️ Transformers-only flags</b></summary>

| Flag | Purpose |
|------|---------|
| `--device` | Target device (cuda/cpu, auto by default) |
| `--load_in_8bit` | 8-bit quantization (needs bitsandbytes) |
| `--load_in_4bit` | 4-bit quantization (needs bitsandbytes) |
| `--torch_dtype` | Weight dtype: float16/bfloat16/float32 |

> πŸ’‘ Install `bitsandbytes` to enable quantization: `pip install bitsandbytes`

</details>

### πŸ“ˆ Evaluation

After prediction, evaluate the results:

```bash
python evaluate.py --input_path ./results/your-model-name.jsonl
```

This outputs per-task accuracy for each domain as well as overall accuracy.

For batch evaluation (e.g., multiple runs with CoT/RAG or no-context variants):

```bash
python evaluate.py --input_path ./results --batch --output_json ./results/summary.json
```

This scans a folder for `.jsonl` files, reports each file's accuracy, and optionally saves a summary.
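
For a custom breakdown beyond what `evaluate.py` prints, the `judge` field saved with each prediction (see Results Format below) makes accuracy easy to recompute; a minimal sketch:

```python
import json
from collections import defaultdict

# Tally per-(domain, task) accuracy from a prediction file, using the
# boolean "judge" field written alongside each record.
correct, total = defaultdict(int), defaultdict(int)

with open("./results/your-model-name.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        key = (row["source"], row["task"])  # e.g. ("Finance", "Metric Calculation")
        total[key] += 1
        correct[key] += bool(row["judge"])

for key in sorted(total):
    print(key, f"{correct[key] / total[key]:.3f}")
print("overall:", f"{sum(correct.values()) / sum(total.values()):.3f}")
```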

---

## πŸ“ Project Structure

```
LooGLE-v2/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ answer_extractor.py   # Answer extraction logic
β”‚   β”œβ”€β”€ evaluator.py          # Evaluation metrics
β”‚   β”œβ”€β”€ llm_client.py         # LLM client implementations
β”‚   β”œβ”€β”€ data_loader.py        # Data loading utilities
β”‚   └── utils.py              # Common utilities
β”œβ”€β”€ config/
β”‚   └── models.jsonl          # Model configurations
β”œβ”€β”€ predict.py                # Prediction script (vLLM server)
β”œβ”€β”€ predict_transformers.py   # Prediction script (direct transformers)
β”œβ”€β”€ rag_preprocess.py         # RAG context preprocessing
β”œβ”€β”€ evaluate.py               # Evaluation script
└── requirements.txt          # Dependencies
```

---

## πŸ“„ Results Format

Prediction outputs are saved in JSONL format, one record per example:

```json
{
  "id": "sample_id",
  "source": "Finance",
  "task": "Metric Calculation",
  "type": "question_type",
  "correct_answer": "123.45",
  "pred_answer": "123.40",
  "response": "The correct answer is 123.40",
  "judge": true
}
```

---

## πŸ“– Citation

If you use LooGLE v2 in your research, please cite:

```bibtex
@article{he2025loogle,
  title={LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?},
  author={He, Ziyuan and Wang, Yuxuan and Li, Jiaqi and Liang, Kexin and Zhang, Muhan},
  journal={arXiv preprint arXiv:2510.22548},
  year={2025}
}
```

---

## πŸ“œ License

The code in this repository is licensed under the MIT License (see the [LICENSE](LICENSE) file for details); the dataset itself is distributed under CC BY-SA 4.0, as declared in the dataset card metadata above.

---