Update README.md
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/65403d8781a8731a1c09a584/vk7yT4Y7ydBOHM6BojmlI.mp4"></video>
## Training & Evaluation
The model training is based on the **[LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT)** codebase.

For deployment, refer to the **SGLang deployment** section in the LLaVA-NeXT repo.

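As a minimal sketch, serving a checkpoint with SGLang typically looks like the following — the model path and port here are placeholders, so check the SGLang deployment section of the LLaVA-NeXT repo for the exact arguments that apply to this model:

```shell
# Launch an SGLang inference server for a local checkpoint.
# /path/to/checkpoint and the port are placeholders -- adjust to your setup.
python -m sglang.launch_server \
    --model-path /path/to/checkpoint \
    --port 30000
```

Once the server is up, requests can be sent to it over HTTP on the chosen port.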
For benchmark evaluation, we use the awesome **lmms-eval** package. Check our repo **[MultiUI](https://github.com/neulab/multiui)** to evaluate on the benchmarks mentioned in the paper.

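A typical lmms-eval invocation is sketched below — the checkpoint path, task name, and process count are placeholder assumptions, so see the lmms-eval and MultiUI repos for the task names and model arguments actually supported:

```shell
# Evaluate a checkpoint with lmms-eval (paths and task name are placeholders).
accelerate launch --num_processes=8 -m lmms_eval \
    --model llava \
    --model_args pretrained="/path/to/checkpoint" \
    --tasks websrc \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```

Results and per-sample logs are written under the given output path.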
## Model Performance