
OSL Gymnastics Localization (Action Spotting)

This repository provides a Gymnastics action spotting / localization dataset in an OpenSportsLab / SoccerNet-style format.

The dataset is organized by split (train/, valid/, test/) with video clips and corresponding localization annotations in JSON.


📌 Task

  • Task type: action_spotting (a.k.a. temporal action localization / event spotting)
  • Sport domain: Artistic Gymnastics (VT, UB, BB, FX)
  • Annotation granularity: clip-relative timestamps in milliseconds (position_ms)
  • Label format: single-label events (one label per event)

This dataset supports fine-grained temporal localization of gymnastics elements such as:

  • Vault phases (VT_0, VT_1, VT_2, VT_3)
  • Uneven Bars elements (UB_circles_start, UB_dismounts_end, etc.)
  • Balance Beam skills (BB_turns_start, BB_flight_salto_end, etc.)
  • Floor Exercise elements (FX_back_salto_start, FX_leap_jump_hop_end, etc.)

πŸ“ Main branch structure

Current structure on main:

main/
├── annotations-localization-train.json
├── annotations-localization-valid.json
├── annotations-localization-test.json
├── train/
│   ├── <clip>.mp4
│   └── ...
├── valid/
│   ├── <clip>.mp4
│   └── ...
└── test/
    ├── <clip>.mp4
    └── ...

  • The three folders train/, valid/, test/ contain short gymnastics video clips (.mp4).

  • The three JSON files contain the localization labels for the corresponding split.

  • Clip filenames follow the pattern:

    <YouTubeID>_E_<start>_<end>.mp4
    

    Example:

    0LtLS9wROrk_E_000147_000152.mp4
    

🧾 Annotation format

Each annotation file follows a SoccerNet-like schema.

Top-level keys:

  • version: format version (e.g., "2.0")
  • task: "action_spotting"
  • dataset_name: dataset identifier
  • labels: list of valid event classes under a given head_name
  • data: list of items (each item corresponds to one clip)

data[] item fields

Each item contains:

  • id: stable item identifier
  • inputs: list containing a video descriptor
  • events: list of labeled events in that clip
  • metadata: extra information such as fps, width, height, segment boundaries, etc.

Example (simplified):

{
  "id": "Gymnastics_0LtLS9wROrk_E_000147_000152",
  "inputs": [
    {
      "type": "video",
      "path": "test/0LtLS9wROrk_E_000147_000152.mp4",
      "fps": 29.97
    }
  ],
  "events": [
    {
      "head": "gymnastics_action",
      "label": "VT_0",
      "position_ms": "1201",
      "comment": "round-off, flic-flac on, stretched salto backward with 1 turn off"
    }
  ]
}
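
For reference, a minimal sketch of reading a split's annotations with Python's standard library (field names follow the schema above; `load_split` and `iter_events` are illustrative helpers, not part of the dataset):

```python
import json

def load_split(path):
    """Load one split's annotation file and return its list of clip items."""
    with open(path, encoding="utf-8") as f:
        ann = json.load(f)
    return ann["data"]

def iter_events(items):
    """Yield (clip_path, label, position_ms) for every event in every item."""
    for item in items:
        clip_path = item["inputs"][0]["path"]
        for ev in item["events"]:
            # position_ms appears as a string in the example above, so coerce.
            yield clip_path, ev["label"], int(ev["position_ms"])
```

For example, `iter_events(load_split("annotations-localization-test.json"))` walks every event in the test split.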

⏱️ Timestamp ↔ video position relationship (IMPORTANT)

For each event:

  • position_ms is clip-relative time in milliseconds.
  • It is computed from the clip-relative frame index using:

    position_ms = round(frame / fps * 1000)

So:

  • position_ms = 0 corresponds to the first frame of the clip.
  • position_ms = 4240 means the event happens around 4.240 seconds after the clip start.

If you need the approximate frame index back:

frame ≈ round(position_ms / 1000 * fps)
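
Both conversions are one-liners; a sketch (note the roundtrip is only approximate because of rounding):

```python
def frame_to_ms(frame, fps):
    """Clip-relative frame index -> position_ms, per the formula above."""
    return round(frame / fps * 1000)

def ms_to_frame(position_ms, fps):
    """Approximate inverse: position_ms -> clip-relative frame index."""
    return round(position_ms / 1000 * fps)
```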

🏷️ Labels

Labels are stored under:

labels.<head_name>.labels

where <head_name> is typically:

gymnastics_action

Example classes:

  • BB_turns_start
  • BB_turns_end
  • FX_back_salto_start
  • FX_back_salto_end
  • UB_circles_start
  • UB_circles_end
  • UB_transition_flight_start
  • UB_dismounts_end
  • VT_0
  • VT_1
  • VT_2
  • VT_3

Each event is single-label, meaning one action per timestamp.
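
Since each event carries exactly one label, it is easy to cross-check the labels actually used against the declared class list (a sketch assuming the top-level schema described earlier; `undeclared_labels` is an illustrative helper):

```python
def undeclared_labels(ann, head_name="gymnastics_action"):
    """Return event labels that are missing from labels.<head_name>.labels."""
    declared = set(ann["labels"][head_name]["labels"])
    used = {ev["label"] for item in ann["data"] for ev in item["events"]}
    return used - declared
```

An empty result means every event label in the file is a declared class.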


🧩 Clip naming convention

Clips are derived from longer competition videos.

Pattern:

<YouTubeID>_E_<start>_<end>

Where:

  • <YouTubeID>: original competition video ID
  • _E: event version marker
  • <start> / <end>: start and end time in seconds within the original long video

Example:

0LtLS9wROrk_E_000147_000152

This means:

  • Original video: 0LtLS9wROrk

  • Clip covers seconds 147 to 152

  • The clip file is stored as:

    test/0LtLS9wROrk_E_000147_000152.mp4
    
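
The pattern can be parsed with a regular expression (a sketch; `parse_clip_name` is an illustrative helper, and the greedy match keeps any underscores inside the YouTube ID):

```python
import re

CLIP_RE = re.compile(r"^(?P<yt_id>.+)_E_(?P<start>\d+)_(?P<end>\d+)$")

def parse_clip_name(stem):
    """Split a clip stem into (YouTube ID, start seconds, end seconds)."""
    m = CLIP_RE.match(stem)
    if m is None:
        raise ValueError(f"unexpected clip name: {stem!r}")
    return m["yt_id"], int(m["start"]), int(m["end"])
```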

🧰 Notes

  • Paths in inputs[].path are relative paths pointing to the split folder:

    • train/<clip>.mp4
    • valid/<clip>.mp4
    • test/<clip>.mp4
  • position_ms values are clip-relative, not global video timestamps.

  • Clips may contain padding before or after annotated actions.

  • The repository includes .gitattributes for Git/LFS handling of large video files.


✅ Quick sanity check

Pick one entry in annotations-localization-test.json:

  1. Open the clip video located at:

    test/<clip>.mp4
    
  2. Convert position_ms to seconds:

    seconds = position_ms / 1000
    
  3. Jump to that time in the video.

  4. You should observe the corresponding gymnastics element near that timestamp.
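
Steps 1–3 can be scripted by handing the clip to ffplay (part of FFmpeg) with a seek just before the event; a sketch, where `window_s` is an illustrative padding parameter:

```python
def sanity_check_cmd(clip_path, position_ms, window_s=2.0):
    """Build an ffplay command that starts playback ~window_s before the event."""
    start_s = max(0.0, position_ms / 1000 - window_s)
    return ["ffplay", "-ss", f"{start_s:.3f}", clip_path]
```

The returned list can be passed to `subprocess.run(...)`, e.g. for the example clip: `sanity_check_cmd("test/0LtLS9wROrk_E_000147_000152.mp4", 1201)`.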


📚 Data Source & Attribution

The gymnastics clips and raw annotations in this dataset are derived from the data released in the official repository of the paper:

Spotting Temporally Precise, Fine-Grained Events in Video (ECCV 2022)
James Hong, Haotian Zhang, Michaël Gharbi, Matthew Fisher, Kayvon Fatahalian

Source repository (gymnastics data):
https://github.com/jhong93/spot/tree/main/data/finegym

If you use this dataset, please cite the original paper:

@inproceedings{precisespotting_eccv22,
    author={Hong, James and Zhang, Haotian and Gharbi, Micha\"{e}l and Fisher, Matthew and Fatahalian, Kayvon},
    title={Spotting Temporally Precise, Fine-Grained Events in Video},
    booktitle={ECCV},
    year={2022}
}