xiaoyan001 committed on
Commit 0aada65 · verified · 1 Parent(s): de01525

Upload folder using huggingface_hub

LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Meituan
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- license: cc-by-4.0
 
  language:
  - zh
@@ -11,20 +11,21 @@ tags:
  - audio-question-answering
 
  ---
-
- <h1> UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in OmniModels</h1>
  <p align="center" width="100%">
- <img src="assets/uno-bench-title.jpg" width="80%" height="100%">
  </p>
 
- <font size=2><div align='center' >
- [[📑 Github Paper](./assets/UNO-Bench.pdf)]
- [🌐 ArXiv Paper](https://arxiv.org/abs/2510.18915)
- [📊 UNO-Bench Data](https://huggingface.co/datasets/meituan-longcat/UNO-Bench)
- </div></font>
 
  ## 👀 UNO-Bench Overview
- Multimodal Large Language Models are advancing from uni-modal to omni-modal understanding, integrating visual, audio, and language modalities. However, the relationship between uni-modal and omni-modal performance needs clarification, necessitating thorough evaluation to enhance omni model intelligence. We introduce **UNO-Bench**, a novel benchmark that evaluates both uni-modal and omni-modal capabilities. It includes 3730 human curated samples, with 98% cross-modality solvability, across 44 task types, and features an innovative multi-step open-ended question type for assessing complex reasoning. Additionally, we propose a general scoring model supporting 6 question types for automatic evaluation with 95% accuracy. Experimental results reveal a **Compositional Law** between omni-modal and uni-modal performance, with omni-modal capability acting as a bottleneck for weak models but promoting synergy in strong models.
 
  <div>
  <p align="center">
@@ -32,7 +33,7 @@ Multimodal Large Language Models are advancing from uni-modal to omni-modal unde
  </p>
  </div>
 
- The MultiModal Benchmarks compare image (I), audio (A), video (V), and text (T) modalities, reporting omni-modal solution accuracy (Acc.) and percentage (Solvable). Source origin affects data contamination, with private sources being safer. QA types are multi-choice (MC) and multi-step open-ended (MO), in English (EN) and Chinese (CH). UNO-Bench features 1250 omni-modal (-omni) and 2480 uni-modal (-uni) samples.
 
  <div>
  <p align="center">
@@ -40,12 +41,20 @@ The MultiModal Benchmarks compare image (I), audio (A), video (V), and text (T)
  </p>
  </div>
 
  ## 📊 Dataset Construction
 
  **Material Collection**
 
- Our materials feature three key characteristics: **a. Diverse Sources**—primarily real-world photos and videos from crowdsourcing, supplemented by copyright-free websites and high-quality public datasets. **b. Rich and Diverse Topics**—spanning society, culture, art, life, literature, and science. **c. Live-Recorded Audio**—dialogue recorded by over 20 human speakers, ensuring rich audio features that mirror real-world vocal diversity.
 
  **QA Annotation**
 
@@ -67,47 +76,19 @@ Regarding automated data compression, we propose a cluster-guided stratified sam
 
  ## 📍 Dataset Examples
 
- UNO-Bench consists of multi-step open-ended question samples and multi-choice question samples:
-
- ---
-
- **Question:** Given that Xiaoming has 5 different colors, he will use these colors to color the four regions in the figure. If Xiaoming colors Region I first, there are 5 ways to color it. Then he colors Regions II and IV, and finally Region III. Based on the information above, the requirements in audio, and the image, answer the following questions:
-
- 1. When Regions II and IV are colored with the same color, how many coloring methods are there?
- 2. When Regions II and IV are colored with different colors, how many coloring methods are there?
- 3. In summary, what is the total number of coloring methods?
-
- [audio1.mp3](https://github.com/user-attachments/files/23122352/audio1.mp3) (Audio Content: In the provided image, any two regions that share a common border cannot be the same color, and each region must be colored with only one color)
 
  <p align="center">
- <img width="239" height="192" alt="image2" src="./assets/qa_example_1.png" />
  </p>
 
- **Answer:**
-
- 1. 80 (4 points)
-
- 2. 180 (4 points)
-
- 3. 260 (2 points)
-
  ---
 
- **Question**: The video shows a game I have been playing recently. The player swipes a finger across the screen to slice various flying fruits, such as watermelon, pineapple, kiwi, strawberry, and banana. Slicing different fruits earns different scores, and a fruit that is not sliced earns no points. It is known that one kiwi is worth 2 points, one strawberry 3 points, one star fruit 20 points, and one orange 6 points. The audio gives the scoring rules for the other fruits. Based on all the information above, the video, and the audio, how many points were scored in total in the round shown in the video? Please choose the correct answer from the following options:
-
- A. 30 points
-
- **B. 35 points**
-
- C. 37 points
-
- D. 40 points
-
- [audio2.mp3](https://github.com/user-attachments/files/23122214/audio1.mp3) (Audio Content: one watermelon is 10 points, one banana 2 points, one green apple 1 point, one lemon 3 points, one coconut 5 points, one red apple 3 points, one peach 1 point)
-
- [video2.mp4](https://github.com/user-attachments/assets/2baafc12-b14a-4fd1-831b-9517589a766b)
-
 
  ## 🔍 Results
 
@@ -116,44 +97,50 @@ Our main evaluation reveals a clear performance hierarchy where proprietary mode
  <img src="./assets/cross-modal-results.png" width="60%" height="100%" />
  </p>
 
- **Finding 1. 📍Gemini-2.5-Pro demonstrates human-like perception in omni-modal understanding but lags in reasoning ability.** The model's performance is only 8.3% lower than human experts, indicating comparable intelligence. Interestingly, humans excel more in reasoning (81.3%) than perception (74.3%), contrasting with the model's strengths.
 
  <p align="center">
  <img src="./assets/gemini-2.5-vs-human.png" width="60%" height="100%" />
  </p>
 
- **Finding 2. 📍Compositional Law: Omni-modal capability effectiveness correlates with the product of individual modality performances following a power-law.** The observed omni-modal scores align closely with the product of uni-modal scores, as shown by the fitted law (dashed line), achieving an impressive $R^2=0.9759$. The convex, accelerating curve illustrates the power-law synergy.
 
  $$
  P_{\text{Omni}} = C \cdot (P_{\text{Audio}} \times P_{\text{Visual}})^{\alpha} + b
  $$
 
  <p align="center">
  <img src="./assets/compositional-law.png" width="60%" height="100%" />
  </p>
 
  ## 📌 Checklist
 
  - **Data**
- - ✅ Paper
- - ✅ Dataset Examples
- - 🚧 Benchmark Leaderboard
- - 🚧 UNO-Bench Dataset
  - **Code**
- - 🚧 Evaluation Toolkit
- - 🚧 Model Weights and Configurations
 
  ## 🖊️ Citation
 
  If you find our work helpful for your research, please consider citing our work.
  ```bash
  @misc{chen2025unobench,
- title={UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in OmniModels},
- author={Chen Chen and ZeYang Hu and Fengjiao Chen and Liya Ma and Jiaxing Liu and Xiaoyu Li and Xuezhi Cao},
  year={2025},
  eprint={2510.18915},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
- url={arxiv},
  }
  ```
 
 
  ---
+ license: mit
 
  language:
  - zh
 
  - audio-question-answering
 
  ---
+ <h1> UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in Omni Models</h1>
 
  <p align="center" width="100%">
+ <img src="assets/uno-bench-title.jpeg" width="80%" height="100%">
  </p>
 
+ <div align="center" style="line-height: 1;">
+ <a target="_blank" href='https://meituan-longcat.github.io/UNO-Bench'><img src='https://img.shields.io/badge/Project-Page-green'></a>
+ <a target="_blank" href='https://github.com/meituan-longcat/UNO-Bench/blob/main/UNO-Bench.pdf'><img src='https://img.shields.io/badge/Technique-Report-red'></a>
+ <a target="_blank" href='https://huggingface.co/datasets/meituan-longcat/UNO-Bench'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-blue'></a>
+ <a href='./'><img src='https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53'></a>
+ </div>
+
 
  ## 👀 UNO-Bench Overview
+ Multimodal Large Language Models have been progressing from uni-modal understanding toward unifying the visual, audio, and language modalities, collectively termed omni models. However, the correlation between uni-modal and omni-modal performance remains unclear, and comprehensive evaluation is needed to drive the evolution of omni-model intelligence. In this work, we introduce a novel, high-quality, and **UN**ified **O**mni model benchmark, **UNO-Bench**. The benchmark is designed to effectively evaluate both **UN**i-modal and **O**mni-modal capabilities under a unified ability taxonomy, spanning 44 task types and 5 modality combinations. It includes 1250 human-curated omni-modal samples with 98% cross-modality solvability and 2480 enhanced uni-modal samples. The human-generated dataset is well suited to real-world scenarios, particularly within the Chinese context, whereas the automatically compressed dataset offers a 90% increase in speed and maintains 98% consistency across 18 public benchmarks. In addition to traditional multi-choice questions, we propose an innovative multi-step open-ended question format to assess complex reasoning. A general scoring model is incorporated, supporting 6 question types for automated evaluation with 95% accuracy. Experimental results show a **Compositional Law** between omni-modal and uni-modal performance: omni-modal capability manifests as a bottleneck effect on weak models while exhibiting synergistic promotion on strong models.
 
  <div>
  <p align="center">
 
  </p>
  </div>
 
+ <!-- The MultiModal Benchmarks compare image (I), audio (A), video (V), and text (T) modalities, reporting omni-modal solution accuracy (Acc.) and percentage (Solvable). Source origin affects data contamination, with private sources being safer. QA types are multi-choice (MC) and multi-step open-ended (MO), in English (EN) and Chinese (CH). UNO-Bench features 1250 omni-modal (-omni) and 2480 uni-modal (-uni) samples. -->
 
  <div>
  <p align="center">
 
  </p>
  </div>
 
+ **Main Contributions**
+
+ - 🌟 **Propose UNO-Bench, the first unified omni model benchmark**, efficiently assessing uni-modal and omni-modal understanding. It verifies the compositional law between these capabilities: omni-modal ability acts as a bottleneck for weaker models and enhances stronger ones.
+
+ - 🌟 **Establish a high-quality dataset pipeline** with human-centric processes and automated compression. UNO-Bench contains 1250 omni-modal samples with 98% cross-modality solvability and 2480 uni-modal samples across 44 task types and 5 modality combinations. The dataset excels in real-world scenarios, especially in the Chinese context, and offers a 90% speed increase while maintaining 98% consistency across 18 benchmarks.
+
+ - 🌟 **Introduce Multi-Step Open-Ended Questions (MO)** for complex reasoning evaluation, providing realistic results. A General Scoring Model supports 6 question types with 95% accuracy on OOD models and benchmarks (a minimal per-step scoring sketch follows this list).
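To make the MO format concrete, here is a minimal sketch of per-step partial credit, mirroring the coloring example shown in the earlier README hunk (steps worth 4, 4, and 2 points). The item layout and the exact-match `judge_step` stub are illustrative assumptions, not the released General Scoring Model, which is a learned scorer and may behave differently.

```python
# Hypothetical sketch of per-step partial credit for a multi-step open-ended (MO) item.
# The data layout and the exact-match judge below are assumptions for illustration only.
from typing import Dict, List


def judge_step(prediction: str, reference: str) -> bool:
    """Toy judge: normalized exact match (stand-in for a model-based scorer)."""
    return prediction.strip().lower() == reference.strip().lower()


def score_mo_item(steps: List[Dict], predictions: List[str]) -> float:
    """Return the fraction of points earned across the steps of one MO item."""
    total = sum(step["points"] for step in steps)
    earned = sum(
        step["points"]
        for step, pred in zip(steps, predictions)
        if judge_step(pred, step["answer"])
    )
    return earned / total if total else 0.0


# Example mirroring the coloring question: three steps worth 4, 4, and 2 points.
item = [
    {"answer": "80", "points": 4},
    {"answer": "180", "points": 4},
    {"answer": "260", "points": 2},
]
print(score_mo_item(item, ["80", "180", "250"]))  # 0.8 -- the last step is missed
```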
 
  ## 📊 Dataset Construction
 
  **Material Collection**
 
+ Our materials feature three key characteristics: **a. Diverse Sources**—primarily real-world photos and videos from crowdsourcing, supplemented by copyright-free websites and high-quality public datasets. **b. Rich and Diverse Topics**—spanning society, culture, art, life, literature, and science. **c. Live-Recorded Audio**—dialogue recorded by over 20 human speakers, ensuring rich audio features that mirror real-world vocal diversity.
 
  **QA Annotation**
 
  ## 📍 Dataset Examples
 
+ The capabilities of UNO-Bench are systematically categorized into two primary dimensions: Perception and Reasoning. Please follow this [link](https://huggingface.co/datasets/meituan-longcat/UNO-Bench) to download UNO-Bench. Below are some examples from UNO-Bench:
 
  <p align="center">
+ <img alt="omni-perception-cases" src="./assets/omni-perception-cases.png" />
  </p>
 
  ---
 
+ <p align="center">
+ <img alt="omni-reasoning-cases" src="./assets/omni-reasoning-cases.png" />
+ </p>
 
+ For more samples, please refer to the project [page](https://meituan-longcat.github.io/UNO-Bench).
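As a convenience, here is a minimal loading sketch using the Hugging Face `datasets` library. The `validation` split name is an assumption based on the `validation.parquet` file in this repository, and the column names should be inspected rather than assumed; the official evaluation toolkit may load the data differently.

```python
# Minimal loading sketch (assumption: the repo exposes a "validation" split backed by
# validation.parquet; inspect the schema before relying on any column name).
from datasets import load_dataset

ds = load_dataset("meituan-longcat/UNO-Bench", split="validation")
print(ds)      # schema and number of rows
print(ds[0])   # one sample, to see the question/answer/media fields
```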
 
  ## 🔍 Results
 
  <img src="./assets/cross-modal-results.png" width="60%" height="100%" />
  </p>
 
+ **Finding 1. 📍Perception Ability and Reasoning Ability:** Compared to human experts, Gemini-2.5-Pro exhibits similar performance in perception but falls significantly behind in reasoning. Humans, in contrast, are more proficient in reasoning than in perception (81.3% vs. 74.3%).
 
  <p align="center">
  <img src="./assets/gemini-2.5-vs-human.png" width="60%" height="100%" />
  </p>
 
+ **Finding 2. 📍Compositional Law: Omni-modal capability effectiveness correlates with the product of individual modality performances following a power law.**
+ Motivated by our experimental observations and through rigorous mathematical derivation, we propose the following formula to model the compositional law:
 
  $$
  P_{\text{Omni}} = C \cdot (P_{\text{Audio}} \times P_{\text{Visual}})^{\alpha} + b
  $$
+
+ This model fits our data almost perfectly, achieving a coefficient of determination ($R^2$) of $0.9759$.
+ - $\alpha = 2.19$ is the synergistic exponent; being greater than 1, it explains the transition from a "short-board effect" to an "emergent ability".
+ - $b = 0.24$ is the baseline bias, close to 0.25, reflecting the random-guess accuracy of our benchmark.
+ - $C = 1.03$ is the scaling coefficient, close to 1, indicating a harmonious and naturally scaled system.
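For concreteness, here is a short sketch of how such a power law can be fitted with `scipy.optimize.curve_fit`. The per-model scores below are invented placeholders rather than the paper's measurements, so the fitted values will differ; the paper reports $C \approx 1.03$, $\alpha \approx 2.19$, $b \approx 0.24$ with $R^2 \approx 0.9759$ on its own data.

```python
# Sketch: fit P_Omni = C * (P_Audio * P_Visual)**alpha + b to (uni-modal, omni-modal) scores.
# The scores below are made-up placeholders; substitute the per-model results you measured.
import numpy as np
from scipy.optimize import curve_fit


def compositional_law(p_audio_times_visual, C, alpha, b):
    return C * p_audio_times_visual ** alpha + b


# Placeholder data: product of uni-modal scores and observed omni-modal score per model.
x = np.array([0.30, 0.42, 0.55, 0.68, 0.80])   # P_Audio * P_Visual
y = np.array([0.29, 0.34, 0.44, 0.58, 0.74])   # observed P_Omni

(C, alpha, b), _ = curve_fit(compositional_law, x, y, p0=(1.0, 2.0, 0.25))
pred = compositional_law(x, C, alpha, b)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"C={C:.2f}, alpha={alpha:.2f}, b={b:.2f}, R^2={r2:.4f}")
```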
  <p align="center">
  <img src="./assets/compositional-law.png" width="60%" height="100%" />
  </p>
 
+ **Finding 3. 📍Redundant Synchronized Audio-visual Video Data:** Audio-visual synchronized video data is highly redundant, making it challenging to design questions that test understanding of both the audio and visual modalities. Consequently, using standard videos for training or evaluation makes it difficult to develop models with effective modal collaboration capabilities. For samples, please visit the project [page](https://meituan-longcat.github.io/UNO-Bench).
+
  ## 📌 Checklist
 
  - **Data**
+ - ✅ Benchmark Leaderboard
+ - ✅ UNO-Bench Dataset
  - **Code**
+ - Evaluation Toolkit
+ - Model Weights and Configurations
 
  ## 🖊️ Citation
 
  If you find our work helpful for your research, please consider citing our work.
  ```bash
  @misc{chen2025unobench,
+ title={UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in Omni Models},
+ author={Chen Chen and ZeYang Hu and Fengjiao Chen and Liya Ma and Jiaxing Liu and Xiaoyu Li and Ziwen Wang and Xuezhi Cao and Xunliang Cai},
  year={2025},
  eprint={2510.18915},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2510.18915},
  }
  ```
 
assets/cross-modal-results.png CHANGED

Git LFS Details

  • SHA256: dcce070c098f52fae7c6ae6aa727e83e06e4c029bf7df386720f9865e6547c01
  • Pointer size: 131 Bytes
  • Size of remote file: 176 kB

Git LFS Details

  • SHA256: 8b6ad45a72df5967439dcf7ab36dd39bfcd84dec6210bd20437f361e28cafe07
  • Pointer size: 131 Bytes
  • Size of remote file: 148 kB
assets/data-statistics.png CHANGED

Git LFS Details

  • SHA256: d922c55c212a1f0940b9db898f7ff2a7e7fd00eade2b9f503d69d42648991111
  • Pointer size: 131 Bytes
  • Size of remote file: 124 kB

Git LFS Details

  • SHA256: 05b56e4c61e861ce77e239a75e0ad6b129945ce7d659472bb635ff251ebea378
  • Pointer size: 131 Bytes
  • Size of remote file: 104 kB
assets/omni-ability.png CHANGED

Git LFS Details

  • SHA256: bf04e887472a52b64125c34bde48f7594a142822894c08ff61de8e9ae7582b1f
  • Pointer size: 132 Bytes
  • Size of remote file: 1.91 MB

Git LFS Details

  • SHA256: ed012bbaf7ed576c7828179264c58195dda6a17962b43bca782ac2a7d967be87
  • Pointer size: 132 Bytes
  • Size of remote file: 2.13 MB
assets/omni-perception-cases.png ADDED

Git LFS Details

  • SHA256: c33785bf156cf6b17c7183daf2e292c1556931e107dea754f7a767d18775ef77
  • Pointer size: 131 Bytes
  • Size of remote file: 802 kB
assets/omni-reasoning-cases.png ADDED

Git LFS Details

  • SHA256: 99e2fa95e21548522c9373fbfad33947c47a2d35a73ed0213b0a277490177ad1
  • Pointer size: 131 Bytes
  • Size of remote file: 676 kB
assets/uno-bench-title.jpeg ADDED

Git LFS Details

  • SHA256: 95ce6c4791243450d3693fc3d2252336d6ce4a55bd38aa2da48268ebc6abab40
  • Pointer size: 131 Bytes
  • Size of remote file: 731 kB
validation.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1b361139a5b4a6aab70dcf21eeb310233ceb13429bf80d5e55db73f0936951ef
- size 2097855
+ oid sha256:8e2d058dc9e1578c7c620c0550e806e372c2ee7703655bd1f0f492df1173b0d1
+ size 2097502