Enhance dataset card for ReasonMap-Plus: Add paper, links, usage, abstract, citation

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +97 -1
README.md CHANGED
@@ -1,3 +1,99 @@
  ---
  license: apache-2.0
- ---
+ task_categories:
+ - image-text-to-text
+ tags:
+ - multimodal
+ - visual-question-answering
+ - spatial-reasoning
+ - reinforcement-learning
+ - transit-maps
+ language:
+ - en
+ ---
+
+ # ReasonMap-Plus Dataset
+
+ This repository hosts `ReasonMap-Plus`, an extended dataset introduced in the paper [RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning](https://huggingface.co/papers/2510.02240).
+
+ ## Paper Abstract
+ Fine-grained visual reasoning remains a core challenge for multimodal large language models (MLLMs). The recently introduced ReasonMap highlights this gap by showing that even advanced MLLMs struggle with spatial reasoning in structured and information-rich settings such as transit maps, a task of clear practical and scientific importance. However, standard reinforcement learning (RL) on such tasks is impeded by sparse rewards and unstable optimization. To address this, we first construct ReasonMap-Plus, an extended dataset that introduces dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. Next, we propose RewardMap, a multi-stage RL framework designed to improve both visual understanding and reasoning capabilities of MLLMs. RewardMap incorporates two key designs. First, we introduce a difficulty-aware reward design that incorporates detail rewards, directly tackling the sparse rewards while providing richer supervision. Second, we propose a multi-stage RL scheme that bootstraps training from simple perception to complex reasoning tasks, offering a more effective cold-start strategy than conventional Supervised Fine-Tuning (SFT). Experiments on ReasonMap and ReasonMap-Plus demonstrate that each component of RewardMap contributes to consistent performance gains, while their combination yields the best results. Moreover, models trained with RewardMap achieve an average improvement of 3.47% across 6 benchmarks spanning spatial reasoning, fine-grained visual reasoning, and general tasks beyond transit maps, underscoring enhanced visual understanding and reasoning capabilities.
+
+ ## Dataset Overview
+ `ReasonMap-Plus` addresses the core challenge of fine-grained visual reasoning for multimodal large language models (MLLMs). It extends the original `ReasonMap` dataset by introducing dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. This dataset is crucial for the `RewardMap` framework, which aims to improve both visual understanding and reasoning capabilities of MLLMs in structured and information-rich settings like transit maps.
+
+ The dataset includes `ReasonMap-Plus` for evaluation and `ReasonMap-Train` for `RewardMap` training.
+
+ ## Links
+ - **Project Page:** [https://fscdc.github.io/RewardMap](https://fscdc.github.io/RewardMap)
+ - **Code Repository:** [https://github.com/fscdc/RewardMap](https://github.com/fscdc/RewardMap)
+
+ <p align="center">
+   <img src="https://github.com/fscdc/RewardMap/raw/main/assets/rewardmap.svg" width="95%" alt="RewardMap Framework Overview" align="center" />
+ </p>
+
+ ## Sample Usage
+
+ To get started with the RewardMap project and use the ReasonMap-Plus dataset, follow the steps below.
+
+ ### 1. Install dependencies
+
+ If you face any issues with the installation, please feel free to open an issue; we will do our best to help.
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### 2. Download the dataset
+
+ <p align="center">
+   <img src="https://github.com/fscdc/RewardMap/raw/main/assets/overview_dataset.svg" width="95%" alt="Dataset Overview" align="center" />
+ </p>
+
+ You can download `ReasonMap-Plus` for evaluation and `ReasonMap-Train` for RewardMap training from Hugging Face, or by running the following command:
+
+ ```bash
+ python utils/download_dataset.py
+ ```
+
+ Then, place the downloaded data under the `data` folder.
+
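+ If you prefer to fetch the files programmatically, here is a minimal sketch using `huggingface_hub`; note that the `repo_id` below is a placeholder, not the actual repository ID, so substitute the dataset repository you intend to download:
+
+ ```python
+ # Minimal sketch (assumed workflow): pull the dataset files from the Hugging Face Hub
+ # into the local `data` folder expected by the scripts in this project.
+ from huggingface_hub import snapshot_download
+
+ local_path = snapshot_download(
+     repo_id="<org>/ReasonMap-Plus",  # placeholder -- replace with the real dataset repo ID
+     repo_type="dataset",
+     local_dir="data",
+ )
+ print(f"Dataset files downloaded to: {local_path}")
+ ```
+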
+ ### 3. Data Format Example
+
+ The data will be converted into a format like the following:
+
+ ```json
+ {
+   "conversations": [
+     {
+       "from": "human",
+       "value": "<image> Please solve the multiple choice problem and put your answer (one of ABCD) in one \"\\boxed{}\". According to the subway map, how many intermediate stops are there between Danube Station and Ibn Battuta Station (except for these two stops)? \
+       A) 8 \
+       B) 1 \
+       C) 25 \
+       D) 12 \
+       "
+     },
+     {
+       "from": "gpt",
+       "value": "B"
+     }
+   ],
+   "images": [
+     "./maps/united_arab_emirates/dubai.png"
+   ]
+ },
+ ```
+
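+ For reference, the following sketch shows one way to consume records in this format with plain Python; the file name `data/reasonmap_plus.json` is a hypothetical placeholder for wherever your converted annotations live:
+
+ ```python
+ # Minimal sketch (assumed file name): pair each question with its ground-truth answer.
+ import json
+
+ with open("data/reasonmap_plus.json", "r", encoding="utf-8") as f:  # hypothetical path
+     samples = json.load(f)
+
+ for sample in samples[:3]:
+     question = sample["conversations"][0]["value"]  # human turn, includes the <image> tag
+     answer = sample["conversations"][1]["value"]    # gpt turn, e.g. "B"
+     image_paths = sample["images"]                  # paths relative to the data folder
+     print(image_paths[0], "->", answer)
+ ```
+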
+ ## Citation
+
+ If you find this work useful in your research, please consider citing our paper:
+
+ ```bibtex
+ @article{feng2025rewardmap,
+   title={RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning},
+   author={Feng, Sicheng and Tuo, Kaiwen and Wang, Song and Kong, Lingdong and Zhu, Jianke and Wang, Huan},
+   journal={arXiv preprint arXiv:2510.02240},
+   year={2025}
+ }
+ ```