Dataset Card for TikTok Harmful Video Dataset (Vietnamese)

Dataset Details

Dataset Description

This dataset contains TikTok videos collected for research on harmful content detection in Vietnamese. Each sample is stored as a folder with:

  • the original video file (video.mp4)
  • a metadata file (metadata.json)

The dataset is designed for multimodal learning (video + audio + text from metadata).

  • Curated by: Student research project (IE212 – Big Data, UIT, VNU-HCM)
  • Language(s): Vietnamese
  • License: CC BY-NC 4.0 (non-commercial research/education use)

Dataset Sources

  • Source platform: TikTok
  • Collection method: Automated crawling (keyword/hashtag-based), then manual filtering

Uses

Direct Use

You can use this dataset for:

  • Video classification (e.g., Safe vs Not Safe)
  • Multimodal research (video/audio/text)
  • Feature extraction pipelines (frames, audio waveform, captions)
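
For multimodal pipelines like these, a minimal PyTorch-style dataset that pairs each video file with its caption text might look like the sketch below (it assumes torch is installed, and that metadata carries "description" and, optionally, "label" keys):

import json
from pathlib import Path

from torch.utils.data import Dataset

class TikTokFolderDataset(Dataset):
    """Sketch: yields video path, caption, and label for each sample folder."""

    def __init__(self, root):
        # Keep only folders that contain both expected files.
        self.samples = sorted(
            d for d in Path(root).iterdir()
            if (d / "video.mp4").exists() and (d / "metadata.json").exists()
        )

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        d = self.samples[idx]
        meta = json.loads((d / "metadata.json").read_text(encoding="utf-8"))
        return {
            "video_path": str(d / "video.mp4"),      # decode frames/audio downstream
            "caption": meta.get("description", ""),  # caption text from metadata
            "label": meta.get("label", ""),          # may be empty if unlabeled
        }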

Out-of-Scope Use

This dataset is not intended for:

  • Commercial use
  • Decisions impacting individuals (e.g., banning accounts, legal enforcement)
  • Identifying or profiling TikTok users

Dataset Structure

Data is stored using a simple folder-per-video layout:

{video_id}/
├── video.mp4
└── metadata.json

Files

  • video.mp4: raw TikTok video
  • metadata.json: JSON describing the video (caption, hashtags, engagement statistics, music title, author, and similar fields)

Note: This dataset does not ship with official train/val/test splits; derive your own as needed (see the sketch below).
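
One reproducible way to derive your own splits is to hash each folder name, so split membership stays stable across machines and runs (a sketch; the 80/10/10 fractions are arbitrary):

import hashlib

def split_of(video_id: str, val_pct: int = 10, test_pct: int = 10) -> str:
    # Map the id deterministically into a bucket 0-99.
    bucket = int(hashlib.md5(video_id.encode("utf-8")).hexdigest(), 16) % 100
    if bucket < test_pct:
        return "test"
    if bucket < test_pct + val_pct:
        return "val"
    return "train"

print(split_of("7301234567890123456"))  # hypothetical id; always maps to the same split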

How to Use (Python)

Below are simple examples that load and iterate over the dataset from a local folder clone.

1. List samples and read metadata

import json
from pathlib import Path

dataset_dir = Path("PATH_TO_DATASET_ROOT")  # e.g. "./tiktok_dataset"

video_folders = sorted(p for p in dataset_dir.iterdir() if p.is_dir())  # sorted for a stable order
print("Total samples:", len(video_folders))

# Read first sample
sample_dir = video_folders[0]
video_path = sample_dir / "video.mp4"
meta_path = sample_dir / "metadata.json"

with meta_path.open("r", encoding="utf-8") as f:
    meta = json.load(f)

print("Sample video_id:", sample_dir.name)
print("Video path:", video_path)
print("Metadata keys:", list(meta.keys()))

2. Build a simple manifest (CSV) for training

This is useful if you want a single file listing all samples.

import json
import csv
from pathlib import Path

dataset_dir = Path("PATH_TO_DATASET_ROOT")
out_csv = Path("manifest.csv")

rows = []
for d in dataset_dir.iterdir():
    if not d.is_dir():
        continue
    video_path = d / "video.mp4"
    meta_path = d / "metadata.json"
    if not video_path.exists() or not meta_path.exists():
        continue

    meta = json.loads(meta_path.read_text(encoding="utf-8"))
    caption = meta.get("description") or meta.get("caption") or ""  # this dataset's metadata stores the caption under "description"

    # If you have labels inside metadata.json, try:
    # label = meta.get("label")  # e.g. "safe" / "not_safe"
    # Otherwise set it to empty and label later.
    label = meta.get("label", "")

    rows.append({
        "video_id": d.name,
        "video_path": str(video_path),
        "caption": caption,
        "label": label,
    })

with out_csv.open("w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["video_id", "video_path", "caption", "label"])
    writer.writeheader()
    writer.writerows(rows)

print("Wrote:", out_csv, "rows =", len(rows))

3. Extract audio from MP4 (optional)

If you want audio for ASR or audio embeddings, you can extract a mono 16 kHz WAV using ffmpeg.

import subprocess
from pathlib import Path

video_path = Path("PATH_TO_A_VIDEO.mp4")
out_wav = video_path.with_suffix(".wav")

cmd = [
    "ffmpeg", "-y",
    "-i", str(video_path),
    "-ac", "1",          # mono
    "-ar", "16000",      # 16kHz
    str(out_wav)
]
subprocess.run(cmd, check=True)
print("Saved:", out_wav)

4. Read frames with OpenCV (optional)

import cv2

video_path = "PATH_TO_A_VIDEO.mp4"
cap = cv2.VideoCapture(video_path)

frames = []
max_frames = 16

# Read the first `max_frames` frames sequentially from the start of the clip.
while len(frames) < max_frames:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)

cap.release()
print("Extracted frames:", len(frames))

Dataset Creation

Curation Rationale

The dataset was created to support Vietnamese-focused research on harmful content detection for short-form videos. It supports multimodal modeling by combining raw video with metadata text.

Data Collection and Processing

  • Videos were collected using keyword/hashtag-based crawling.
  • Broken/duplicate items were filtered out (one plausible check is sketched below).
  • Each sample is stored as {video_id}/video.mp4 + {video_id}/metadata.json.
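
The exact filtering criteria are not published; the sketch below shows one plausible pass that flags unreadable files and byte-identical duplicates (assumes ffprobe, which ships with ffmpeg, is on PATH):

import hashlib
import subprocess
from pathlib import Path

dataset_dir = Path("PATH_TO_DATASET_ROOT")
seen = {}

for d in sorted(p for p in dataset_dir.iterdir() if p.is_dir()):
    video = d / "video.mp4"
    if not video.exists():
        continue

    # ffprobe exits non-zero if the container is unreadable.
    probe = subprocess.run(["ffprobe", "-v", "error", str(video)],
                           capture_output=True)
    if probe.returncode != 0:
        print("Broken:", d.name)
        continue

    # Byte-identical duplicates share the same content hash.
    digest = hashlib.md5(video.read_bytes()).hexdigest()
    if digest in seen:
        print("Duplicate:", d.name, "==", seen[digest])
    else:
        seen[digest] = d.name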

Annotations

Where labels are included, they follow this scheme:

  • Classes: Safe / Not Safe
  • Method: Manual labeling following project guidelines
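
If labels are present, it is worth checking class balance before training; a minimal check against the manifest built in example 2:

import csv
from collections import Counter

with open("manifest.csv", newline="", encoding="utf-8") as f:
    counts = Counter(row["label"] or "(unlabeled)" for row in csv.DictReader(f))

print(counts)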

Personal and Sensitive Information

Because the source material is social-media video, samples may contain personal or sensitive information that appeared in the original uploads. No additional personal data was added by the dataset curators. Use the dataset only for non-commercial research and follow ethical data-handling practices.

Bias, Risks, and Limitations

  • The dataset may reflect TikTok platform bias (recommendation, trends, sampling).
  • Harmful content definitions may be subjective and context-dependent.
  • The dataset may not represent all demographics or topics equally.

Recommendations

  • Use this dataset for research/education only.
  • Do not use it as the sole basis for real-world moderation decisions.
  • Report limitations and potential bias when publishing results.

Dataset Card Contact

Please open an issue in the dataset repository for questions or problems.
