OSL Gymnastics Localization (Action Spotting)
This repository provides a Gymnastics action spotting / localization dataset in an OpenSportsLab / SoccerNet-style format.
The dataset is organized by split (train/, valid/, test/) with video clips and corresponding localization annotations in JSON.
🎯 Task
- Task type: `action_spotting` (a.k.a. temporal action localization / event spotting)
- Sport domain: Artistic Gymnastics (VT, UB, BB, FX)
- Annotation granularity: clip-relative timestamps in milliseconds (`position_ms`)
- Label format: single-label events (one label per event)
This dataset supports fine-grained temporal localization of gymnastics elements such as:
- Vault phases (`VT_0`, `VT_1`, `VT_2`, `VT_3`)
- Uneven Bars elements (`UB_circles_start`, `UB_dismounts_end`, etc.)
- Balance Beam skills (`BB_turns_start`, `BB_flight_salto_end`, etc.)
- Floor Exercise elements (`FX_back_salto_start`, `FX_leap_jump_hop_end`, etc.)
📁 Main branch structure
Current structure on main:
```
main/
├── annotations-localization-train.json
├── annotations-localization-valid.json
├── annotations-localization-test.json
├── train/
│   ├── <clip>.mp4
│   └── ...
├── valid/
│   ├── <clip>.mp4
│   └── ...
└── test/
    ├── <clip>.mp4
    └── ...
```
The three folders `train/`, `valid/`, and `test/` contain short gymnastics video clips (`.mp4`). The three JSON files contain the localization labels for the corresponding split.

Clip filenames follow the pattern `<YouTubeID>_E_<start>_<end>.mp4`, for example:

```
0LtLS9wROrk_E_000147_000152.mp4
```
🧾 Annotation format
Each annotation file follows a SoccerNet-like schema.
Top-level keys:

- `version`: format version (e.g., `"2.0"`)
- `task`: `"action_spotting"`
- `dataset_name`: dataset identifier
- `labels`: list of valid event classes under a given `head_name`
- `data`: list of items (each item corresponds to one clip)
`data[]` item fields

Each item contains:

- `id`: stable item identifier
- `inputs`: list containing a video descriptor
- `events`: list of labeled events in that clip
- `metadata`: extra information such as `fps`, `width`, `height`, segment boundaries, etc.
Example (simplified):

```json
{
  "id": "Gymnastics_0LtLS9wROrk_E_000147_000152",
  "inputs": [
    {
      "type": "video",
      "path": "test/0LtLS9wROrk_E_000147_000152.mp4",
      "fps": 29.97
    }
  ],
  "events": [
    {
      "head": "gymnastics_action",
      "label": "VT_0",
      "position_ms": "1201",
      "comment": "round-off, flic-flac on, stretched salto backward with 1 turn off"
    }
  ]
}
```
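A minimal sketch of loading one split's annotation file and iterating its events, following the schema above (the `int(...)` normalization is an assumption based on the string-valued `position_ms` in the example):

```python
import json

def load_annotations(path):
    """Load one split's annotation file, e.g. annotations-localization-test.json."""
    with open(path) as f:
        return json.load(f)

def iter_events(annotations):
    """Yield (clip_path, label, position_ms) for every event in a loaded split."""
    for item in annotations["data"]:
        clip_path = item["inputs"][0]["path"]  # e.g. "test/<clip>.mp4"
        for ev in item["events"]:
            # position_ms appears as a string in the example above; normalize to int
            yield clip_path, ev["label"], int(ev["position_ms"])

# Usage:
# for path, label, ms in iter_events(load_annotations("annotations-localization-test.json")):
#     print(path, label, ms)
```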
⏱️ Timestamp ↔ video position relationship (IMPORTANT)
For each event:

- `position_ms` is the clip-relative time in milliseconds.
- It is computed from the clip-relative frame index using:

```
position_ms = round(frame / fps * 1000)
```

So:

- `position_ms = 0` corresponds to the first frame of the clip.
- `position_ms = 4240` means the event happens around 4.240 seconds after the clip start.

If you need the approximate frame index back:

```
frame ≈ round(position_ms / 1000 * fps)
```
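The two conversions above can be written as a pair of helpers:

```python
def frame_to_ms(frame, fps):
    """Clip-relative frame index -> position_ms, as defined above."""
    return round(frame / fps * 1000)

def ms_to_frame(position_ms, fps):
    """Approximate inverse: position_ms -> clip-relative frame index."""
    return round(position_ms / 1000 * fps)
```

Because of rounding, the inverse is approximate, but the round trip recovers the original frame index for typical frame rates.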
🏷️ Labels
Labels are stored under `labels.<head_name>.labels`, where `<head_name>` is typically `gymnastics_action`.

Example classes:

- `BB_turns_start`, `BB_turns_end`
- `FX_back_salto_start`, `FX_back_salto_end`
- `UB_circles_start`, `UB_circles_end`
- `UB_transition_flight_start`, `UB_dismounts_end`
- `VT_0`, `VT_1`, `VT_2`, `VT_3`
Each event is single-label, meaning one action per timestamp.
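Since many classes come in `<stem>_start` / `<stem>_end` pairs, one common post-processing step is to pair them back into segments. The pairing rule below (match an `_end` to the most recent open `_start` with the same stem) is an illustrative assumption; the dataset itself only stores independent point events:

```python
def pair_segments(events):
    """Pair <stem>_start / <stem>_end point events into (stem, start_ms, end_ms)
    segments. Pairing by shared stem is an assumption for illustration."""
    open_starts = {}
    segments = []
    for ev in sorted(events, key=lambda e: int(e["position_ms"])):
        label, ms = ev["label"], int(ev["position_ms"])
        if label.endswith("_start"):
            open_starts[label[: -len("_start")]] = ms
        elif label.endswith("_end"):
            stem = label[: -len("_end")]
            if stem in open_starts:
                segments.append((stem, open_starts.pop(stem), ms))
    return segments
```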
🧩 Clip naming convention
Clips are derived from longer competition videos.
Pattern: `<YouTubeID>_E_<start>_<end>`

Where:

- `<YouTubeID>`: original competition video ID
- `_E`: event version marker
- `<start>` / `<end>`: start and end time in seconds within the original long video
Example: `0LtLS9wROrk_E_000147_000152`

This means:

- Original video: `0LtLS9wROrk`
- Clip covers seconds `147` to `152`
- The clip file is stored as `test/0LtLS9wROrk_E_000147_000152.mp4`
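The naming convention can be parsed mechanically, for example with a small regex (a sketch; it assumes the ID itself never contains the `_E_<digits>_<digits>` suffix pattern):

```python
import re

CLIP_RE = re.compile(r"^(?P<ytid>.+)_E_(?P<start>\d+)_(?P<end>\d+)(?:\.mp4)?$")

def parse_clip_name(name):
    """Split '<YouTubeID>_E_<start>_<end>[.mp4]' into (youtube_id, start_s, end_s)."""
    m = CLIP_RE.match(name)
    if m is None:
        raise ValueError(f"not a clip name: {name}")
    return m["ytid"], int(m["start"]), int(m["end"])
```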
🧰 Notes
- Paths in `inputs[].path` are relative paths pointing into the split folders: `train/<clip>.mp4`, `valid/<clip>.mp4`, `test/<clip>.mp4`.
- `position_ms` values are clip-relative, not global video timestamps.
- Clips may contain padding before or after annotated actions.
- The repository includes a `.gitattributes` file for Git LFS handling of large video files.
✅ Quick sanity check
Pick one entry in `annotations-localization-test.json`:

1. Open the clip video located at `test/<clip>.mp4`.
2. Convert `position_ms` to seconds: `seconds = position_ms / 1000`.
3. Jump to that time in the video.

You should observe the corresponding gymnastics element near that timestamp.
📚 Data Source & Attribution
The gymnastics clips and raw annotations in this dataset are derived from the gymnastics data released in the official repository of the paper:

Spotting Temporally Precise, Fine-Grained Events in Video (ECCV 2022)
James Hong, Haotian Zhang, Michaël Gharbi, Matthew Fisher, Kayvon Fatahalian

Source repository (gymnastics data):
https://github.com/jhong93/spot/tree/main/data/finegym
If you use this dataset, please cite the original paper:
```bibtex
@inproceedings{precisespotting_eccv22,
  author    = {Hong, James and Zhang, Haotian and Gharbi, Micha\"{e}l and Fisher, Matthew and Fatahalian, Kayvon},
  title     = {Spotting Temporally Precise, Fine-Grained Events in Video},
  booktitle = {ECCV},
  year      = {2022}
}
```