
If you find our method/dataset helpful, please consider citing our paper:

@inproceedings{ma2025large,
title={A Large-Scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining},
author={Ma, Qi and Li, Yue and Ren, Bin and Sebe, Nicu and Konukoglu, Ender and Gevers, Theo and Van Gool, Luc and Paudel, Danda Pani},
booktitle={2025 International Conference on 3D Vision (3DV)},
pages={145--155},
year={2025},
organization={IEEE}
}


2D Image/Depth Rendering of the Objaverse Dataset

In total, the rendered split contains 167,857 objects. The object IDs are listed in the completed_renders.txt file. After unzipping, the image/depth renders are organized in the following folder structure:

# e.g.,
000-000/000074a334c541878360457c672b6c2e
├── depth.zip
├── image.zip
├── metadata.json
└── transforms_train.json
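A minimal sketch for traversing the release and unpacking the per-object archives; the root directory name and the one-ID-per-line format of completed_renders.txt are assumptions, not part of the release:

import zipfile
from pathlib import Path

# Hypothetical root containing the unzipped shards (000-000/, 000-001/, ...)
root = Path("objaverse_renders")

# Object IDs from completed_renders.txt, assumed one per line
object_ids = set(Path("completed_renders.txt").read_text().split())

for obj_dir in sorted(root.glob("*/*")):
    if not obj_dir.is_dir() or obj_dir.name not in object_ids:
        continue
    # Extract image.zip and depth.zip in place, next to metadata.json
    for archive in ("image.zip", "depth.zip"):
        with zipfile.ZipFile(obj_dir / archive) as zf:
            zf.extractall(obj_dir)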

Camera Parameters

  • 72 views per object, uniformly sampled on the upper hemisphere
  • Image dimensions: 400×400
  • Intrinsics matrix:
    [[1250    0  200]
     [   0 1250  200]
     [   0    0    1]]

  • Extrinsics: stored under the "transform_matrix" key as camera-to-world matrices (see the sketch after this list).
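The parameters above assemble into a standard pinhole intrinsics matrix. A minimal sketch, assuming NumPy; the variable names are illustrative:

import numpy as np

fl_x = fl_y = 1250.0  # focal length in pixels
cx = cy = 200.0       # principal point of the 400x400 renders

# 3x3 pinhole intrinsics matrix
K = np.array([
    [fl_x, 0.0,  cx],
    [0.0,  fl_y, cy],
    [0.0,  0.0,  1.0],
])

# The per-frame "transform_matrix" is camera-to-world; invert it to obtain
# the world-to-camera extrinsics used for projection:
# w2c = np.linalg.inv(c2w)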

For 2D rendering, we save per-view frame information in transforms_train.json for each object, in the following format (a short parsing sketch follows the example):

{
    "camera_angle_x": 0.005538113186738931,
    "camera_angle_y": 0.004168855934281471,
    "fl_x": 1250.0,
    "fl_y": 1250.0,
    "k1": 0.0,
    "k2": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "cx": 200.0,
    "cy": 200.0,
    "w": 400,
    "h": 400,
    "aabb_scale": 4,
    "frames": [
        {
            "file_path": "image/000",
            "rotation": 1.9416110387254664,
            "transform_matrix": [
                [
                    0.3623749315738678,
                    0.925559937953949,
                    -0.10965016484260559,
                    -0.32895052433013916
                ],
                [
                    -0.9320324659347534,
                    0.3598583936691284,
                    -0.04263206571340561,
                    -0.12789620459079742
                ],
                [
                    0.0,
                    0.11764649301767349,
                    0.993055522441864,
                    2.9791667461395264
                ],
                [
                    0.0,
                    0.0,
                    0.0,
                    1.0
                ]
            ],
            "elevation": 83.24371388761189,
            "azimuth": 111.24611797498106
        },
        // ... more frames for this object
    ]
}
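A minimal sketch for loading the pose file and iterating frames, assuming image.zip has already been extracted; the object path is a placeholder:

import json
import numpy as np

obj_dir = "000-000/000074a334c541878360457c672b6c2e"  # placeholder object

with open(f"{obj_dir}/transforms_train.json") as f:
    meta = json.load(f)

for frame in meta["frames"]:
    c2w = np.array(frame["transform_matrix"])  # 4x4 camera-to-world pose
    R, t = c2w[:3, :3], c2w[:3, 3]             # rotation and camera center
    image_path = f"{obj_dir}/{frame['file_path']}.png"  # e.g. .../image/000.png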

RGB Image Reading

  • Format: PNG files (000.png, 001.png, ...)
  • Images are in RGBA format (with alpha channel)
  • The alpha channel can be used for background masking, as shown below
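A minimal sketch for alpha-based background handling; imageio and NumPy are tooling assumptions, not requirements of the dataset:

import numpy as np
import imageio.v2 as imageio

rgba = imageio.imread("image/000.png").astype(np.float32) / 255.0  # HxWx4 in [0, 1]
rgb, alpha = rgba[..., :3], rgba[..., 3:4]

foreground_mask = alpha[..., 0] > 0.5    # binary object mask from the alpha channel
white_bg = rgb * alpha + (1.0 - alpha)   # composite onto a white background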

Depth Map Reading

  • Format: 4-channel RGBA PNG files from Blender (000.png, 001.png, ...)
  • Note: Only the first channel (R) contains depth data
  • Blender saves depth as inverted values: [0,8] meters → [1,0] normalized
  • The snippet below remaps back to linear depth: depth_linear = depth_min + (1.0 - depth_img) * (depth_max - depth_min)
  • Background pixels (no geometry) have normalized values close to 1.0 and are masked out in the snippet below
import numpy as np
import imageio.v2 as imageio  # any PNG reader works; imageio is assumed here

# Load a 4-channel RGBA depth render from Blender (e.g., depth/000.png)
depth_img_raw = imageio.imread("depth/000.png")

# Only the first channel (R) carries depth data
depth_img = depth_img_raw[:, :, 0]
# Convert uint8 depth to float normalized to [0, 1]
depth_img = depth_img.astype(np.float32) / 255.0
# Blender saved [0, 8] meters inverted as [1, 0]; remap back to meters
depth_min, depth_max = 0.0, 8.0
depth_linear = depth_min + (1.0 - depth_img) * (depth_max - depth_min)
# Keep pixels strictly inside the valid depth range
valid_mask = (depth_linear > 0.001) & (depth_linear < depth_max - 0.001)
# Drop background pixels (normalized depth at/near 1.0)
background_mask = depth_img > 0.999
valid_mask = valid_mask & ~background_mask
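With the linear depth, the intrinsics, and a camera-to-world pose, valid pixels can be lifted to world-space points. A sketch under two assumptions: depth is z-depth (distance along the optical axis), and the poses follow the OpenGL/Blender convention (camera looks down -Z); neither is confirmed by the release, so verify against a known object:

import numpy as np

def backproject(depth_linear, valid_mask, c2w, fl=1250.0, cx=200.0, cy=200.0):
    # Pixel grid: u is the column index, v is the row index
    h, w = depth_linear.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_linear[valid_mask]
    x = (u[valid_mask] - cx) / fl * z
    y = (v[valid_mask] - cy) / fl * z
    pts_cam = np.stack([x, y, z], axis=-1)  # Nx3 camera-space points
    # Assumed OpenGL/Blender camera: +X right, +Y up, looking down -Z
    pts_cam[:, 1:] *= -1.0
    # Apply the camera-to-world rotation and translation
    return pts_cam @ c2w[:3, :3].T + c2w[:3, 3]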