VideoTG-R1: Boosting Video Temporal Grounding via Curriculum Reinforcement Learning on Reflected Boundary Annotations
Lu Dong1,3, Haiyu Zhang2,3, Han Lin4,3, Ziang Yan5,3, Xiangyu Zeng6,3, Hongjie Zhang3, Yifei Huang3, Yi Wang3, Zhen-Hua Ling1, Limin Wang6,3, Yali Wang7,3†
1University of Science and Technology of China
2Beihang University
3Shanghai Artificial Intelligence Laboratory
4Shanghai Jiao Tong University
5Zhejiang University
6Nanjing University
7Chinese Academy of Sciences
†Corresponding author
VideoTG-R1 is a multi-agent system for data-efficient video temporal grounding. It contains three modules: 1) a Boundary Reflection Agent that filters the training data by identifying and discarding partially annotated samples; 2) a Difficulty Estimation Agent that estimates the difficulty of each sample via zero-shot evaluation; and 3) a Curriculum RL strategy that dynamically masks the videos of hard-to-ground samples according to the training step, easing their training difficulty. VideoTG-R1 achieves state-of-the-art performance on Charades-STA and ActivityNet-Captions. Moreover, with only 10% of the training data, our method outperforms models trained on the full dataset under both GRPO and SFT paradigms.
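The curriculum masking can be pictured with a short Python sketch. This is only an illustration under stated assumptions (the function name, the linear schedule, and the frame-window layout are ours, not the repository's code): early in training a hard-to-ground sample keeps only a window of frames around its annotated segment, and the window grows to cover the full video as training proceeds.

# Hypothetical sketch of the curriculum masking idea (not the actual
# implementation): hard samples start with a small frame window around the
# annotated segment and gradually see the whole video.
def curriculum_keep_window(num_frames, gt_span, step, total_steps, min_ratio=0.3):
    """Return the (start, end) frame window kept for a hard sample.

    gt_span: (gt_start, gt_end) frame indices of the annotated segment.
    The kept window grows linearly from min_ratio of the video at step 0
    to the full video at the final step.
    """
    ratio = min_ratio + (1.0 - min_ratio) * (step / max(total_steps, 1))
    keep = max(int(num_frames * ratio), gt_span[1] - gt_span[0] + 1)
    center = (gt_span[0] + gt_span[1]) // 2
    start = max(0, min(center - keep // 2, num_frames - keep))
    return start, start + keep

# Example: a 100-frame video whose ground truth spans frames 40-50.
print(curriculum_keep_window(100, (40, 50), step=0, total_steps=1000))     # small window
print(curriculum_keep_window(100, (40, 50), step=1000, total_steps=1000))  # full video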
Abstract
Video temporal grounding (VTG) aims to locate precise segments in videos based on language queries, which is a fundamental challenge in video understanding. While recent Multimodal Large Language Models (MLLMs) have shown promise in tackling VTG through reinforcement learning (RL), they overlook the challenges arising from both the quality and difficulty of training samples. (1) Partially annotated samples. Many samples contain relevant segments beyond the annotated interval, introducing ambiguous supervision. (2) Hard-to-ground samples. Samples with poor zero-shot performance produce consistently low and indistinguishable rewards during RL training, exhibiting no clear preference among multiple outputs and thus hindering learning efficiency. To address these challenges, we propose VideoTG-R1, a novel curriculum RL framework with reflected boundary annotations, enabling data-efficient training. Specifically, we propose a Boundary Reflection Agent that utilizes MLLMs to predict query-relevant timestamps outside the annotated intervals, allowing us to identify and filter out partially annotated samples, thereby reducing ambiguity. Furthermore, we introduce a Difficulty Estimation Agent to assess the training difficulty of each sample and design a curriculum RL strategy that dynamically masks the videos of hard-to-ground samples according to the training steps, easing the training difficulty and providing clearer preference. Experiments on the VTG and grounded VideoQA tasks demonstrate the effectiveness of our method. Remarkably, with only 10% of the training samples and 21% of the computational budget, VideoTG-R1 outperforms full-data counterparts under both group relative policy optimization (GRPO) and supervised fine-tuning (SFT). The code is available in this repository.
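To see why hard-to-ground samples stall RL training, consider a temporal-IoU reward of the kind commonly used for GRPO-style VTG training (an assumption for illustration; the paper's exact reward may differ). When every sampled prediction lands far from the ground truth, all rewards in the group are near zero, so the group-relative advantages carry almost no preference signal.

# Minimal sketch of a temporal-IoU reward (assumed, not the paper's exact definition).
def temporal_iou(pred, gt):
    """IoU between two (start, end) segments in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])  # hull; equals the union whenever the segments overlap
    return inter / union if union > 0 else 0.0

gt = (42.0, 55.0)
easy_group = [(41.0, 56.0), (43.0, 54.0), (40.0, 50.0), (10.0, 20.0)]  # sampled predictions
hard_group = [(0.0, 5.0), (70.0, 80.0), (90.0, 95.0), (10.0, 15.0)]

print([round(temporal_iou(p, gt), 2) for p in easy_group])  # mixed rewards -> clear preference
print([round(temporal_iou(p, gt), 2) for p in hard_group])  # all ~0 -> no usable preference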
Code Environment Preparation
pip install -r requirements.txt
Dataset Preparation
All training and evaluation datasets can be downloaded from VideoMind's Hugging Face repository.
Evaluation
- video grounding
cd Eval
your_ckpt=xxx
dataset_name=charades # or anet
bash video_grounding.sh ${dataset_name}
- grounded qa
cd Eval
your_ckpt=xxx
dataset_name=rextime # or nextgqa
bash grounded_qa.sh ${dataset_name}
Fast Training
- Download the intermediate_results from the Hugging Face repository first.
cd ./videotg_r1
bash grpo_train.sh
Full Training
- Boundary Reflection Agent
- Run the agent on each dataset from 'qvhighlights, didemo, tacos, queryd, hirest_grounding, hirest_step, cosmo_cap, internvid_vtime'.
- You can run the code on a single GPU or multiple GPUs; the dataset is processed in folds (see the sketch after the commands below).
cd ./BRA
dataset_name=cosmo_cap
each_fold_size=22000
fold_index=0
bash bra_test.sh ${dataset_name} ${each_fold_size} ${fold_index}
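The each_fold_size and fold_index arguments suggest that the annotation list is split into contiguous folds so each GPU or job processes one chunk; the sketch below is our reading of that convention, not the script's actual code.

# Assumed fold slicing: fold_index selects one contiguous chunk of samples.
def select_fold(samples, each_fold_size, fold_index):
    start = fold_index * each_fold_size
    return samples[start:start + each_fold_size]

samples = list(range(50000))            # stand-in for the annotation list
fold0 = select_fold(samples, 22000, 0)  # samples 0..21999, e.g. on GPU 0
fold1 = select_fold(samples, 22000, 1)  # samples 22000..43999, e.g. on GPU 1
fold2 = select_fold(samples, 22000, 2)  # samples 44000..49999, e.g. on GPU 2
print(len(fold0), len(fold1), len(fold2))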
- Difficulty Estimation Agent
- Run the agent on each dataset from 'qvhighlights, didemo, tacos, queryd, hirest_grounding, hirest_step, cosmo_cap, internvid_vtime'.
- You can run the code on a single GPU or multiple GPUs, using the same fold splitting as above (see the sketch after the commands below).
cd ./DEA
dataset_name=cosmo_cap
each_fold_size=22000
fold_index=0
bash bra_test.sh ${dataset_name} ${each_fold_size} ${fold_index}
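The zero-shot results produced here presumably feed the curriculum as per-sample difficulty labels. A hypothetical sketch of turning a zero-shot IoU into such a label (the threshold and label names are assumptions, not the repository's exact rule):

# Assumed rule: samples with low zero-shot IoU are treated as hard-to-ground.
def difficulty_label(zero_shot_iou, hard_threshold=0.3):
    return "hard" if zero_shot_iou < hard_threshold else "easy"

print(difficulty_label(0.05))  # hard -> video gets masked early in curriculum training
print(difficulty_label(0.62))  # easy -> full video from the start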
- Curriculum GRPO
cd ./videotg_r1
bash grpo_train.sh
Acknowledgement
VideoTG-R1 is built with reference to the following projects: VideoMind and VideoChat-R1. Thanks for their work!
Citation
If you find our work helpful or inspiring, please feel free to cite it.
@misc{videotg_r1,
  title  = {VideoTG-R1: Boosting Video Temporal Grounding via Curriculum Reinforcement Learning on Reflected Boundary Annotations},
  author = {Dong, Lu and Zhang, Haiyu and Lin, Han and Yan, Ziang and Zeng, Xiangyu and Zhang, Hongjie and Huang, Yifei and Wang, Yi and Ling, Zhen-Hua and Wang, Limin and Wang, Yali},
}


