---
license: cc-by-4.0
size_categories:
  - 10K<n<100K
task_categories:
  - other
tags:
  - cross-modal
  - knowledge-distillation
  - audio-visual
  - multimodal
  - vggsound
  - feature-extraction
---

# VGGSound-50k Preprocessed Dataset

This dataset contains preprocessed data from the VGGSound dataset, derived from the VGGSound-AVEL50k subset for cross-modal knowledge distillation research. The preprocessing is tailored to the MST-Distill (Mixture of Specialized Teachers for Cross-Modal Knowledge Distillation) method.

This preprocessing work is based on the VGGSound-AVEL50k subset from `jasongief/CPSP`: [2023 TPAMI] Contrastive Positive Sample Propagation along the Audio-Visual Event Line.

The related preprocessing steps are described in our paper: MST-Distill: Mixture of Specialized Teachers for Cross-Modal Knowledge Distillation | Code


## Original Dataset

The original VGGSound dataset and the AVEL50k subset are available at:

- **Original VGGSound:** VGGSound Dataset
- **VGGSound-AVEL50k subset:** Used in CPSP research for audio-visual event localization

## Dataset Information

- **Classes:** 141 audio-visual event categories
- **Samples:** 48,755 video clips from the VGGSound-AVEL50k subset (available at processing time)
- **Modalities:** Audio and visual features
- **Content:** Audio-visual events with temporal segment labels
- **Optimization:** Specifically preprocessed for cross-modal knowledge distillation (CMKD) methods

## Preprocessing Details

The preprocessing pipeline consists of four main stages:

### 1. Data Organization (`preprocess_VGGsound50K_0.py`)

- Maps video IDs to file paths and class labels
- Creates a category-to-index mapping for the 141 classes
- Output: `VGGS50K_videos.txt` with video paths and class labels
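
The original script is not reproduced here, but a minimal sketch of this stage might look like the following, assuming one subdirectory per category (the actual paths, file naming, and output separator may differ):

```python
import os

VIDEO_ROOT = "VGGSound-AVEL50k/videos"  # hypothetical path to the raw clips

# One subdirectory per category; sorted order gives a stable class index
categories = sorted(os.listdir(VIDEO_ROOT))
category_to_index = {name: idx for idx, name in enumerate(categories)}  # 141 classes

with open("VGGS50K_videos.txt", "w") as f:
    for category in categories:
        label = category_to_index[category]
        cat_dir = os.path.join(VIDEO_ROOT, category)
        for video_file in sorted(os.listdir(cat_dir)):
            # One line per clip: <video path>&<class label> (separator is an assumption)
            f.write(f"{os.path.join(cat_dir, video_file)}&{label}\n")
```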

### 2. Feature Extraction Verification (`preprocess_VGGsound50K_1.py`)

- Checks that both the audio feature file and the visual feature file exist for each sample
- Keeps only samples with a complete audio-visual pair
- Output: `VGGS50k_metadata.txt` with valid sample names and labels
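
A minimal sketch of the verification step, assuming the pre-normalization features follow the same `*_aFeature.npy` / `*_vFeature.npy` naming as the released files (directory names and the line format are assumptions):

```python
import os

FEAT_ROOT = "VGGS50K_features"  # hypothetical directory of raw extracted features

valid_samples = []
with open("VGGS50K_videos.txt") as f:
    for line in f:
        video_path, label = line.strip().rsplit("&", 1)  # assumed separator
        name = os.path.splitext(os.path.basename(video_path))[0]
        audio_path = os.path.join(FEAT_ROOT, "audio_features", f"{name}_aFeature.npy")
        visual_path = os.path.join(FEAT_ROOT, "visual_features", f"{name}_vFeature.npy")
        # Keep a sample only if both modalities were extracted successfully
        if os.path.exists(audio_path) and os.path.exists(visual_path):
            valid_samples.append((name, label))

with open("VGGS50k_metadata.txt", "w") as f:
    for name, label in valid_samples:
        f.write(f"{name}&{label}\n")
```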

### 3. Segment Label Processing (`preprocess_VGGsound50K_2.py`)

- Converts segment labels from JSON format to NumPy arrays
- Produces 10-segment temporal labels for each video
- Output: individual `.npy` files for each sample's segment labels
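
A minimal sketch, assuming the AVEL annotations are stored as a JSON object mapping each sample name to ten per-segment labels (the file name and exact JSON structure are assumptions):

```python
import json
import os
import numpy as np

with open("avel_segment_labels.json") as f:  # hypothetical annotation file
    seg_annotations = json.load(f)

os.makedirs("seg_labels", exist_ok=True)
for name, segments in seg_annotations.items():
    # One [1, 10] array per sample, matching the released *_sLabel.npy files
    labels = np.asarray(segments, dtype=np.float32).reshape(1, 10)
    np.save(f"seg_labels/{name}_sLabel.npy", labels)
```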

### 4. Feature Normalization (`preprocess_VGGsound50K_3.py`)

- Applies min-max normalization globally across all features
- Normalizes both audio and visual features
- Output: normalized feature files in separate directories
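
A minimal sketch of global min-max normalization, i.e. rescaling every feature file to [0, 1] with a single minimum and maximum computed over the whole modality (the directory names, and whether statistics are shared across modalities, are assumptions):

```python
import glob
import os
import numpy as np

def normalize_globally(in_dir, out_dir, suffix):
    os.makedirs(out_dir, exist_ok=True)
    files = sorted(glob.glob(os.path.join(in_dir, f"*_{suffix}.npy")))
    # First pass: global min/max over all feature files of this modality
    gmin, gmax = np.inf, -np.inf
    for path in files:
        x = np.load(path)
        gmin, gmax = min(gmin, x.min()), max(gmax, x.max())
    # Second pass: rescale each file with the global statistics
    for path in files:
        x = np.load(path)
        x_normed = (x - gmin) / (gmax - gmin)
        np.save(os.path.join(out_dir, os.path.basename(path)), x_normed.astype(np.float32))

normalize_globally("VGGS50K_features/audio_features",
                   "VGGS50K_features_normed/audio_features", "aFeature")
normalize_globally("VGGS50K_features/visual_features",
                   "VGGS50K_features_normed/visual_features", "vFeature")
```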

## Data Structure

### File Structure

```
VGGS50K/
├── VGGS50K_features_normed/
│   ├── audio_features/
│   │   └── *_aFeature.npy          # Normalized audio features
│   └── visual_features/
│       └── *_vFeature.npy          # Normalized visual features
├── seg_labels/
│   └── *_sLabel.npy                # Temporal segment labels [1, 10]
├── VGGS50k_metadata.txt            # Sample names and class labels
└── VGGS50K_videos.txt              # Video paths and class labels
```

## Usage

```python
import numpy as np
import torch

# Load sample data
sample_name = "your_sample_name"
label = 0  # class label

# Load features
audio_features = np.load(f'VGGS50K_features_normed/audio_features/{sample_name}_aFeature.npy')
visual_features = np.load(f'VGGS50K_features_normed/visual_features/{sample_name}_vFeature.npy')
segment_labels = np.load(f'seg_labels/{sample_name}_sLabel.npy')

# Convert to PyTorch tensors
audio_tensor = torch.from_numpy(audio_features.astype(np.float32))
visual_tensor = torch.from_numpy(visual_features.astype(np.float32))

print(f"Audio features shape: {audio_tensor.shape}")
print(f"Visual features shape: {visual_tensor.shape}")
print(f"Segment labels shape: {segment_labels.shape}")
```

## Applications

This preprocessed dataset is optimized for:

- Audio-visual event localization with temporal segment labels
- Cross-modal knowledge distillation

## Features and Specifications

- **Audio Features:** Normalized using global min-max scaling
- **Visual Features:** Normalized using global min-max scaling
- **Temporal Resolution:** 10-segment labels for event localization
- **Quality:** Only samples with complete audio-visual features are included

## License

This preprocessed dataset maintains the same license as the original VGGSound dataset: Creative Commons Attribution 4.0 International License (CC BY 4.0).


## Citation

If you use this preprocessed dataset in your research, please cite the original VGGSound paper and the CPSP paper:

**Original VGGSound Citation:**

```bibtex
@inproceedings{chen2020vggsound,
  title={Vggsound: A large-scale audio-visual dataset},
  author={Chen, Honglie and Xie, Weidi and Vedaldi, Andrea and Zisserman, Andrew},
  booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={721--725},
  year={2020}
}
```

**CPSP Paper Citation:**

```bibtex
@article{zhou2022CPSP,
  title={Contrastive Positive Sample Propagation along the Audio-Visual Event Line},
  author={Zhou, Jinxing and Guo, Dan and Wang, Meng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}
```

## Acknowledgments

We thank the original authors of the VGGSound dataset and the CPSP research team for making these valuable resources available to the research community.

**Note to original authors:** If you have any concerns or objections regarding this preprocessed dataset, please contact us and we will promptly remove it.