Upload folder using huggingface_hub
- LICENSE +21 -0
- README.md +44 -57
- assets/cross-modal-results.png +2 -2
- assets/data-statistics.png +2 -2
- assets/omni-ability.png +2 -2
- assets/omni-perception-cases.png +3 -0
- assets/omni-reasoning-cases.png +3 -0
- assets/uno-bench-title.jpeg +3 -0
- validation.parquet +2 -2
LICENSE ADDED

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 Meituan
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

README.md CHANGED
(old version; lines removed in this commit are prefixed with -)

@@ -1,5 +1,5 @@
---
-license:

language:
- zh

@@ -11,20 +11,21 @@ tags:
- audio-question-answering

---
-
-<h1> UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in OmniModels</h1>
<p align="center" width="100%">
-<img src="assets/uno-bench-title.
</p>

-<
-
-
-
-

## 👀 UNO-Bench Overview
-Multimodal Large

<div>
<p align="center">

@@ -32,7 +33,7 @@ Multimodal Large Language Models are advancing from uni-modal to omni-modal unde
</p>
</div>

-The MultiModal Benchmarks compare image (I), audio (A), video (V), and text (T) modalities, reporting omni-modal solution accuracy (Acc.) and percentage (Solvable). Source origin affects data contamination, with private sources being safer. QA types are multi-choice (MC) and multi-step open-ended (MO), in English (EN) and Chinese (CH). UNO-Bench features 1250 omni-modal (-omni) and 2480 uni-modal (-uni) samples.

<div>
<p align="center">

@@ -40,12 +41,20 @@ The MultiModal Benchmarks compare image (I), audio (A), video (V), and text (T)
</p>
</div>

## 📊 Dataset Construction

**Material Collection**

-Our materials feature three key characteristics: **a. Diverse Sources**—primarily real-world photos and videos from crowdsourcing, supplemented by copyright-free websites and high-quality public datasets

**QA Annotation**

@@ -67,47 +76,19 @@ Regarding automated data compression, we propose a cluster-guided stratified sam

## 📍 Dataset Examples

-UNO-Bench
-
----
-
-**Question:** Given that Xiaoming has 5 different colors, he will use these colors to color the four regions in the figure. If Xiaoming colors Region I first, there are 5 ways to color it. Then he colors Regions II and IV, and finally Region III. Based on the information above, the requirements in the audio, and the image, answer the following questions:
-
-1. When Regions II and IV are colored with the same color, how many coloring methods are there?
-2. When Regions II and IV are colored with different colors, how many coloring methods are there?
-3. In summary, what is the total number of coloring methods?
-
-[audio1.mp3](https://github.com/user-attachments/files/23122352/audio1.mp3) (Audio Content: In the provided image, any two regions that share a common border cannot be the same color, and each region must be colored with only one color)

<p align="center">
-<img
</p>

-**Answer:**
-
-1. 80 (4 points)
-
-2. 180 (4 points)
-
-3. 260 (2 points)
-
---

-
-
-
-
-**B. 35 points**
-
-C. 37 points
-
-D. 40 points
-
-[audio2.mp3](https://github.com/user-attachments/files/23122214/audio1.mp3) (Audio Content: one watermelon is worth 10 points, one banana 2 points, one green apple 1 point, one lemon 3 points, one coconut 5 points, one red apple 3 points, and one peach 1 point)
-
-[video2.mp4](https://github.com/user-attachments/assets/2baafc12-b14a-4fd1-831b-9517589a766b)
-

## 🔍 Results

@@ -116,44 +97,50 @@ Our main evaluation reveals a clear performance hierarchy where proprietary mode
<img src="./assets/cross-modal-results.png" width="60%" height="100%" />
</p>

-**Finding 1.

<p align="center">
<img src="./assets/gemini-2.5-vs-human.png" width="60%" height="100%" />
</p>

-**Finding 2. 📍Compositional Law: Omni-modal capability effectiveness correlates with the product of individual modality performances following a power-law.**

$$
P_{\text{Omni}} = C \cdot (P_{\text{Audio}} \times P_{\text{Visual}})^{\alpha} + b
$$
<p align="center">
<img src="./assets/compositional-law.png" width="60%" height="100%" />
</p>

## 📌 Checklist

- **Data**
-  - ✅
-  - ✅ Dataset
-  - 🚧 Benchmark Leaderboard
-  - 🚧 UNO-Bench Dataset
- **Code**
-  -
-  -

## 🖊️ Citation

If you find our work helpful for your research, please consider citing our work.
```bash
@misc{chen2025unobench,
-  title={UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in
-  author={Chen Chen and ZeYang Hu and Fengjiao Chen and Liya Ma and Jiaxing Liu and Xiaoyu Li and Xuezhi Cao},
  year={2025},
  eprint={2510.18915},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
-  url={arxiv},
}
```

(new version; lines added in this commit are prefixed with +)

---
+license: mit

language:
- zh

- audio-question-answering

---
+<h1> UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in Omni Models</h1>
<p align="center" width="100%">
+<img src="assets/uno-bench-title.jpeg" width="80%" height="100%">
</p>

+<div align="center" style="line-height: 1;">
+<a target="_blank" href='https://meituan-longcat.github.io/UNO-Bench'><img src='https://img.shields.io/badge/Project-Page-green'></a>
+<a target="_blank" href='https://github.com/meituan-longcat/UNO-Bench/blob/main/UNO-Bench.pdf'><img src='https://img.shields.io/badge/Technique-Report-red'></a>
+<a target="_blank" href='https://huggingface.co/datasets/meituan-longcat/UNO-Bench'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-blue'></a>
+<a href='./'><img src='https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53'></a>
+</div>
+

## 👀 UNO-Bench Overview
+Multimodal Large Language Models have been progressing from uni-modal understanding toward unifying the visual, audio, and language modalities, a class collectively termed omni models. However, the correlation between uni-modal and omni-modal capabilities remains unclear, and comprehensive evaluation is required to drive the intelligence evolution of omni models. In this work, we introduce a novel, high-quality, and **UN**ified **O**mni model benchmark, **UNO-Bench**, designed to effectively evaluate both **UN**i-modal and **O**mni-modal capabilities under a unified ability taxonomy spanning 44 task types and 5 modality combinations. It includes 1250 human-curated omni-modal samples with 98% cross-modality solvability and 2480 enhanced uni-modal samples. The human-generated dataset is well suited to real-world scenarios, particularly within the Chinese context, while the automatically compressed dataset offers a 90% increase in evaluation speed and maintains 98% consistency across 18 public benchmarks. In addition to traditional multi-choice questions, we propose an innovative multi-step open-ended question format to assess complex reasoning. A general scoring model is incorporated, supporting 6 question types for automated evaluation with 95% accuracy. Experimental results reveal the **Compositional Law** between omni-modal and uni-modal performance: omni-modal capability manifests as a bottleneck effect on weak models, while exhibiting synergistic promotion on strong models.

<div>
<p align="center">
</p>
</div>

+<!-- The MultiModal Benchmarks compare image (I), audio (A), video (V), and text (T) modalities, reporting omni-modal solution accuracy (Acc.) and percentage (Solvable). Source origin affects data contamination, with private sources being safer. QA types are multi-choice (MC) and multi-step open-ended (MO), in English (EN) and Chinese (CH). UNO-Bench features 1250 omni-modal (-omni) and 2480 uni-modal (-uni) samples. -->

<div>
<p align="center">
</p>
</div>

+**Main Contributions**
+
+- 🌟 **Propose UNO-Bench, the first unified omni model benchmark**, efficiently assessing both uni-modal and omni-modal understanding. It verifies the compositional law between these capabilities, which acts as a bottleneck on weaker models and as a synergy on stronger ones.
+
+- 🌟 **Establish a high-quality dataset pipeline** with human-centric processes and automated compression. UNO-Bench contains 1250 omni-modal samples with 98% cross-modality solvability and 2480 uni-modal samples across 44 task types and 5 modality combinations. The dataset excels in real-world scenarios, especially within the Chinese context, and offers a 90% speed increase while maintaining 98% consistency across 18 benchmarks.
+
+- 🌟 **Introduce Multi-Step Open-Ended Questions (MO)** for complex reasoning evaluation, providing more realistic results. A General Scoring Model supports 6 question types with 95% accuracy on OOD models and benchmarks.
+

## 📊 Dataset Construction

**Material Collection**

+Our materials feature three key characteristics: **a. Diverse Sources**—primarily real-world photos and videos from crowdsourcing, supplemented by copyright-free websites and high-quality public datasets. **b. Rich and Diverse Topics**—spanning society, culture, art, life, literature, and science. **c. Live-Recorded Audio**—dialogue recorded by over 20 human speakers, ensuring rich audio features that mirror real-world vocal diversity.

**QA Annotation**
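The body of this section, including the QA annotation process and the cluster-guided stratified sampling used for automated data compression (referenced in a hunk header above), is elided in this commit view. Purely as a generic, illustrative sketch of that sampling idea (not the authors' released code; the embedding source, cluster count, and keep ratio are all assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def compress_benchmark(embeddings: np.ndarray, keep_ratio: float = 0.1,
                       n_clusters: int = 50, seed: int = 0) -> np.ndarray:
    """Cluster sample embeddings, then sample proportionally from every
    cluster so a small subset preserves the full set's task/topic mix.
    All hyperparameters here are illustrative guesses."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init="auto").fit_predict(embeddings)
    rng = np.random.default_rng(seed)
    kept = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        if idx.size == 0:
            continue  # KMeans can occasionally leave a cluster empty
        k = max(1, round(keep_ratio * idx.size))  # keep at least one per stratum
        kept.extend(rng.choice(idx, size=k, replace=False))
    return np.sort(np.asarray(kept))
```

Stratifying by cluster is what keeps rare task types represented in a much smaller subset, the property behind the card's claim of a 90% speedup with 98% consistency.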

## 📍 Dataset Examples

+The capabilities of UNO-Bench are systematically categorized into two primary dimensions: Perception and Reasoning. Please use this [link](https://huggingface.co/datasets/meituan-longcat/UNO-Bench) to download UNO-Bench.
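As a quick-start note (not part of the original card), loading the benchmark with the standard `datasets` library should look roughly like the sketch below; the repo id comes from the link above, and the split name is an assumption based on the validation.parquet file in this repository:

```python
# pip install datasets
from datasets import load_dataset

# Repo id from the download link above; "validation" is an assumed split name
# based on the validation.parquet file shipped in this repo.
uno = load_dataset("meituan-longcat/UNO-Bench", split="validation")
print(uno)     # row count and column names
print(uno[0])  # inspect one sample's fields
```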
+Below are some examples from UNO-Bench:

<p align="center">
+<img alt="omni-modal perception cases" src="./assets/omni-perception-cases.png" />
</p>

---

+<p align="center">
+<img alt="omni-modal reasoning cases" src="./assets/omni-reasoning-cases.png" />
+</p>

+For more samples, please refer to the project [page](https://meituan-longcat.github.io/UNO-Bench).

## 🔍 Results

<img src="./assets/cross-modal-results.png" width="60%" height="100%" />
</p>

+**Finding 1. 📍Perception Ability and Reasoning Ability:** Compared to human experts, Gemini-2.5-Pro exhibits similar performance in perception but falls significantly behind in reasoning. Meanwhile, humans are more proficient in reasoning than in perception (81.3% vs. 74.3%).

<p align="center">
<img src="./assets/gemini-2.5-vs-human.png" width="60%" height="100%" />
</p>

+**Finding 2. 📍Compositional Law: Omni-modal capability effectiveness correlates with the product of the individual modality performances, following a power law.**
+Motivated by our experimental observations and through rigorous mathematical derivation, we propose the following formula to model the compositional law:

$$
P_{\text{Omni}} = C \cdot (P_{\text{Audio}} \times P_{\text{Visual}})^{\alpha} + b
$$
+
+This model fits our data almost perfectly, achieving a coefficient of determination ($R^2$) of $0.9759$; a fitting sketch follows the figure below.
+- $\alpha = 2.19$ is the synergistic exponent; being greater than 1, it explains the transition from a bottleneck ("short-board") effect to an emergent ability.
+- $b = 0.24$ is the baseline bias, close to 0.25, reflecting the random-guess accuracy of our benchmark.
+- $C = 1.03$ is the scaling coefficient, close to 1, indicating a harmonious and naturally scaled system.
<p align="center">
<img src="./assets/compositional-law.png" width="60%" height="100%" />
</p>
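To make the fitted form concrete, here is a small self-contained sketch of recovering such parameters with SciPy's `curve_fit`. The data points are illustrative placeholders rather than UNO-Bench measurements; only the functional form and the reported $\alpha$, $b$, $C$ values come from the card above.

```python
import numpy as np
from scipy.optimize import curve_fit

def compositional_law(xy, C, alpha, b):
    """P_omni = C * (P_audio * P_visual) ** alpha + b"""
    p_audio, p_visual = xy
    return C * (p_audio * p_visual) ** alpha + b

# Placeholder (audio, visual, omni) accuracy triples for five hypothetical
# models -- illustrative only, NOT numbers from UNO-Bench.
p_audio  = np.array([0.55, 0.65, 0.72, 0.80, 0.88])
p_visual = np.array([0.50, 0.62, 0.70, 0.78, 0.90])
p_omni   = np.array([0.33, 0.41, 0.50, 0.64, 0.88])

(C, alpha, b), _ = curve_fit(compositional_law, (p_audio, p_visual), p_omni,
                             p0=(1.0, 2.0, 0.25))
pred = compositional_law((p_audio, p_visual), C, alpha, b)
r2 = 1.0 - np.sum((p_omni - pred) ** 2) / np.sum((p_omni - p_omni.mean()) ** 2)
print(f"C={C:.2f}, alpha={alpha:.2f}, b={b:.2f}, R^2={r2:.4f}")
```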

+**Finding 3. 📍Redundant Synchronized Audio-Visual Video Data:** Audio-visually synchronized video data is highly redundant, making it challenging to design questions that test understanding of both the audio and the visual content. Consequently, using standard videos for training or evaluation makes it difficult to develop models with effective modal collaboration capabilities. For samples, please visit the project [page](https://meituan-longcat.github.io/UNO-Bench).
+

## 📌 Checklist

- **Data**
+  - ✅ Benchmark Leaderboard
+  - ✅ UNO-Bench Dataset
- **Code**
+  - □ Evaluation Toolkit
+  - □ Model Weights and Configurations

## 🖊️ Citation

If you find our work helpful for your research, please consider citing our work.
```bash
@misc{chen2025unobench,
+  title={UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in Omni Models},
+  author={Chen Chen and ZeYang Hu and Fengjiao Chen and Liya Ma and Jiaxing Liu and Xiaoyu Li and Ziwen Wang and Xuezhi Cao and Xunliang Cai},
  year={2025},
  eprint={2510.18915},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2510.18915},
}
```

assets/cross-modal-results.png CHANGED (Git LFS file updated)
assets/data-statistics.png CHANGED (Git LFS file updated)
assets/omni-ability.png CHANGED (Git LFS file updated)
assets/omni-perception-cases.png ADDED (Git LFS file)
assets/omni-reasoning-cases.png ADDED (Git LFS file)
assets/uno-bench-title.jpeg ADDED (Git LFS file)

validation.parquet CHANGED

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8e2d058dc9e1578c7c620c0550e806e372c2ee7703655bd1f0f492df1173b0d1
+size 2097502
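A note on the lines above: Git LFS stores only this three-line pointer stub in the repository, so the roughly 2 MB Parquet object must be fetched (for example via `git lfs pull`, or by downloading from the Hub) before it can be read. A minimal sketch, with the schema left to inspection since this commit view does not show it:

```python
# pip install pandas pyarrow
import pandas as pd

# Reads the actual ~2 MB Parquet object, not the 3-line LFS pointer.
df = pd.read_parquet("validation.parquet")
print(df.shape)          # row count
print(list(df.columns))  # the schema is not shown in this commit, so inspect it
```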