Armed Picky - Pin Sorting Dataset

A LeRobot dataset for training a SO-101 robotic arm to sort colored pins into corresponding targets.

Task Description

Objective: Sort 2 colored pins into their matching targets

| Pin Color | Target |
|-----------|----------------|
| Blue | Circle (tape) |
| Green | Square (box) |

Language instruction: "Pick up the blue pin and place it in the circle. Pick up the green pin and place it in the square."

Dataset Details

| Property | Value |
|--------------|------------------|
| Robot | SO-101 Follower |
| Episodes | 50 |
| Total Frames | 67,085 |
| FPS | 30 |
| Cameras | 2 (front + side) |
| Resolution | 640x480 |
| Video Codec | AV1 |

Data Structure

Actions (6 DOF):

  • shoulder_pan, shoulder_lift, elbow_flex
  • wrist_flex, wrist_roll, gripper

Observations:

  • observation.state - 6 joint positions
  • observation.images.front - front camera (640x480)
  • observation.images.side - side camera (640x480)
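
A quick way to confirm these feature names and tensor shapes is to index a single frame. A minimal sketch (the `action` key and the channels-first image layout follow standard LeRobot conventions and are assumptions here, not documented above):

from lerobot.datasets.lerobot_dataset import LeRobotDataset  # see Usage below

dataset = LeRobotDataset("chvainickas/armed-picky-cleaned")
frame = dataset[0]                                # one timestep as a dict of tensors
print(frame["observation.state"].shape)           # torch.Size([6]) joint positions
print(frame["observation.images.front"].shape)    # e.g. torch.Size([3, 480, 640])
print(frame["action"].shape)                      # torch.Size([6]) commanded joint positions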

🤖 Trained Models (14 Total)

We provide 14 trained models comparing 3 policies (ACT, Diffusion, SmolVLA) across different training configurations:

  • From Scratch: Random initialization
  • Foundation Models: Pre-trained vision encoders or full models

ACT Models (2)

| Config | Steps | Model Repository | Foundation |
|---------|-------|--------------------------|--------------|
| Scratch | 60K | epochdev/act-scratch-60k | - |
| ViT | 60K | epochdev/act-vit-60k | ImageNet ViT |

Diffusion Models (6)

| Config | Steps | Model Repository | Foundation |
|---------|-------|--------------------------------|--------------------|
| Scratch | 20K | epochdev/diffusion-scratch-20k | - |
| Scratch | 40K | epochdev/diffusion-scratch-40k | - |
| Scratch | 60K | epochdev/diffusion-scratch-60k | - |
| R3M | 20K | epochdev/diffusion-r3m-20k | R3M Vision Encoder |
| R3M | 40K | epochdev/diffusion-r3m-40k | R3M Vision Encoder |
| R3M | 60K | epochdev/diffusion-r3m-60k | R3M Vision Encoder |

SmolVLA Models (6)

| Config | Steps | Model Repository | Foundation |
|---------|-------|------------------------------|------------|
| Scratch | 20K | epochdev/smolvla-scratch-20k | - |
| Scratch | 40K | epochdev/smolvla-scratch-40k | - |
| Scratch | 60K | epochdev/smolvla-scratch-60k | - |
| OpenVLA | 20K | epochdev/smolvla-openvla-20k | OpenVLA-7B |
| OpenVLA | 40K | epochdev/smolvla-openvla-40k | OpenVLA-7B |
| OpenVLA | 60K | epochdev/smolvla-openvla-60k | OpenVLA-7B |

📚 Usage

Quick Start - Loading the Dataset

# Note: recent LeRobot releases import from lerobot.datasets;
# older (pre-v3) releases use lerobot.common.datasets.lerobot_dataset instead.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("chvainickas/armed-picky-cleaned")
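
Each item is a dict of tensors keyed by the feature names listed under Data Structure, so the dataset plugs straight into a standard PyTorch DataLoader. A minimal sketch continuing from the `dataset` loaded above (batch size and worker count are arbitrary choices):

import torch

loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

batch = next(iter(loader))
print(batch["observation.state"].shape)         # (32, 6) joint positions
print(batch["observation.images.front"].shape)  # (32, C, H, W) front camera frames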

Setup

# Install LeRobot (shell; prefix with ! in a Colab/Jupyter cell)
pip install -q lerobot

# Login to Hugging Face (Python)
from huggingface_hub import login
login()  # Enter your token

Convert Dataset (First Time Only)

# Convert the cached dataset to LeRobot v3.0 format.
# Clear any previously cached copy first:
rm -rf ~/.cache/huggingface/lerobot/chvainickas/armed-picky-cleaned
python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=chvainickas/armed-picky-cleaned
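
To sanity-check the conversion, reload the dataset and compare against the Dataset Details table above. A short sketch, assuming the standard LeRobotDataset attributes num_episodes, num_frames, and fps:

from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("chvainickas/armed-picky-cleaned")
assert dataset.num_episodes == 50
assert dataset.num_frames == 67085
print(dataset.fps)  # expected: 30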

🚀 Training Models

ACT Policy

From Scratch (60K steps)

lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=act \
  --steps=60000 \
  --output_dir=outputs/act-scratch-60k \
  --policy.repo_id=epochdev/act-scratch-60k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.type=adamw \
  --optimizer.lr=1e-5 \
  --optimizer.weight_decay=0.0001

With ImageNet ViT Foundation (60K steps)

lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=act \
  --policy.use_pretrained_vision=true \
  --policy.pretrained_backbone_weights=google/vit-base-patch16-224 \
  --steps=60000 \
  --output_dir=outputs/act-vit-60k \
  --policy.repo_id=epochdev/act-vit-60k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.type=adamw \
  --optimizer.lr=1e-5

Diffusion Policy (Incremental Training)

From Scratch: 0→20K→40K→60K

# Step 1: Train 0→20K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=diffusion \
  --steps=20000 \
  --output_dir=outputs/diffusion-scratch-20k \
  --policy.repo_id=epochdev/diffusion-scratch-20k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=1e-4

# Step 2: Resume 20K→40K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=diffusion \
  --steps=40000 \
  --output_dir=outputs/diffusion-scratch-40k \
  --policy.repo_id=epochdev/diffusion-scratch-40k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=1e-4 \
  --resume=true \
  --pretrained_policy_path=outputs/diffusion-scratch-20k/checkpoints/last/pretrained_model

# Step 3: Resume 40K→60K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=diffusion \
  --steps=60000 \
  --output_dir=outputs/diffusion-scratch-60k \
  --policy.repo_id=epochdev/diffusion-scratch-60k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=1e-4 \
  --resume=true \
  --pretrained_policy_path=outputs/diffusion-scratch-40k/checkpoints/last/pretrained_model

With R3M Foundation: 0→20K→40K→60K

# Step 1: Train 0→20K with R3M
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=diffusion \
  --policy.use_pretrained_vision=true \
  --policy.pretrained_backbone=r3m \
  --steps=20000 \
  --output_dir=outputs/diffusion-r3m-20k \
  --policy.repo_id=epochdev/diffusion-r3m-20k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=5e-5

# Step 2: Resume 20K→40K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=diffusion \
  --steps=40000 \
  --output_dir=outputs/diffusion-r3m-40k \
  --policy.repo_id=epochdev/diffusion-r3m-40k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=5e-5 \
  --resume=true \
  --pretrained_policy_path=outputs/diffusion-r3m-20k/checkpoints/last/pretrained_model

# Step 3: Resume 40K→60K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=diffusion \
  --steps=60000 \
  --output_dir=outputs/diffusion-r3m-60k \
  --policy.repo_id=epochdev/diffusion-r3m-60k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=5e-5 \
  --resume=true \
  --pretrained_policy_path=outputs/diffusion-r3m-40k/checkpoints/last/pretrained_model

SmolVLA (Incremental Training)

From Scratch: 0→20K→40K→60K

# Step 1: Train 0→20K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=smolvla \
  --steps=20000 \
  --output_dir=outputs/smolvla-scratch-20k \
  --policy.repo_id=epochdev/smolvla-scratch-20k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=2e-5

# Step 2: Resume 20K→40K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=smolvla \
  --steps=40000 \
  --output_dir=outputs/smolvla-scratch-40k \
  --policy.repo_id=epochdev/smolvla-scratch-40k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=2e-5 \
  --resume=true \
  --pretrained_policy_path=outputs/smolvla-scratch-20k/checkpoints/last/pretrained_model

# Step 3: Resume 40K→60K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=smolvla \
  --steps=60000 \
  --output_dir=outputs/smolvla-scratch-60k \
  --policy.repo_id=epochdev/smolvla-scratch-60k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=2e-5 \
  --resume=true \
  --pretrained_policy_path=outputs/smolvla-scratch-40k/checkpoints/last/pretrained_model

Fine-tuned from OpenVLA: 0→20K→40K→60K

# Step 1: Fine-tune 0→20K from OpenVLA
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=smolvla \
  --policy.pretrained_model=openvla/openvla-7b \
  --steps=20000 \
  --output_dir=outputs/smolvla-openvla-20k \
  --policy.repo_id=epochdev/smolvla-openvla-20k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=5e-6

# Step 2: Continue 20K→40K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=smolvla \
  --steps=40000 \
  --output_dir=outputs/smolvla-openvla-40k \
  --policy.repo_id=epochdev/smolvla-openvla-40k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=5e-6 \
  --resume=true \
  --pretrained_policy_path=outputs/smolvla-openvla-20k/checkpoints/last/pretrained_model

# Step 3: Continue 40K→60K
lerobot-train \
  --dataset.repo_id=chvainickas/armed-picky-cleaned \
  --policy.type=smolvla \
  --steps=60000 \
  --output_dir=outputs/smolvla-openvla-60k \
  --policy.repo_id=epochdev/smolvla-openvla-60k \
  --policy.device=cuda \
  --wandb.enable=false \
  --optimizer.lr=5e-6 \
  --resume=true \
  --pretrained_policy_path=outputs/smolvla-openvla-40k/checkpoints/last/pretrained_model

📤 Upload Checkpoints

from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="outputs/act-scratch-60k/checkpoints/last/pretrained_model",
    repo_id="epochdev/act-scratch-60k",
    repo_type="model",
)
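
The same call extends to every run in the model tables above; a short sketch that assumes the output_dir layout used in the training commands:

from huggingface_hub import HfApi

api = HfApi()
runs = [
    "act-scratch-60k", "act-vit-60k",
    *[f"diffusion-{cfg}-{k}k" for cfg in ("scratch", "r3m") for k in (20, 40, 60)],
    *[f"smolvla-{cfg}-{k}k" for cfg in ("scratch", "openvla") for k in (20, 40, 60)],
]
for run in runs:  # 2 ACT + 6 Diffusion + 6 SmolVLA = 14 runs
    api.upload_folder(
        folder_path=f"outputs/{run}/checkpoints/last/pretrained_model",
        repo_id=f"epochdev/{run}",
        repo_type="model",
    )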

🤖 Running on Robot

lerobot-record \
  --robot.type=so101_follower \
  --robot.port=/dev/ttyACM0 \
  --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, side: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30}}" \
  --dataset.single_task="Pick up the blue pin and place it in the circle. Pick up the green pin and place it in the square." \
  --policy.path=epochdev/smolvla-openvla-60k

🔬 Model Comparison Study

All 14 models are trained on the same dataset under the same 60K-step budget (with intermediate 20K and 40K checkpoints saved along the way) to enable fair comparison:

Research Questions:

  1. Does using foundation models improve performance?
  2. Which policy architecture works best for this task?
  3. How does performance scale with training steps (20K vs 40K vs 60K)?
  4. When do foundation models help most (early vs late training)?

Foundation Models Used:

  • ACT: ImageNet pre-trained Vision Transformer (ViT)
  • Diffusion: R3M (robot-specific vision encoder)
  • SmolVLA: OpenVLA-7B (full VLA model trained on Open X-Embodiment)
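
For an offline comparison, each checkpoint can be loaded and queried against dataset frames. A rough sketch using the ACT scratch model (from_pretrained and select_action are standard LeRobot policy methods, but the module path follows recent LeRobot layout and the exact batch keys required depend on the policy config):

import torch
from lerobot.datasets.lerobot_dataset import LeRobotDataset
# Older releases: lerobot.common.policies.act.modeling_act
from lerobot.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("epochdev/act-scratch-60k")
policy.eval()

dataset = LeRobotDataset("chvainickas/armed-picky-cleaned")
frame = dataset[0]
# Add a batch dimension of 1; keep only tensor-valued features.
batch = {k: v.unsqueeze(0) for k, v in frame.items() if isinstance(v, torch.Tensor)}
with torch.no_grad():
    action = policy.select_action(batch)
print(action.shape)  # expected (1, 6): one 6-DOF joint command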

💡 Training Tips

Google Colab:

  • Use T4 GPU (free tier) or A100 (Colab Pro)
  • Save outputs to Google Drive to prevent data loss (see the snippet below)
  • Enable automatic mixed precision for faster training (exposed as --policy.use_amp=true in current LeRobot, rather than a --fp16 flag)
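
A minimal Colab cell for the Drive tip above (the target path is just an example):

from google.colab import drive

drive.mount("/content/drive")  # authorize access when prompted

# Point --output_dir at a Drive-backed folder so checkpoints survive disconnects, e.g.:
# --output_dir=/content/drive/MyDrive/armed-picky/outputs/act-scratch-60k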

Resume Training:

  • All models support checkpoint resumption with --resume=true
  • Use --pretrained_policy_path to specify checkpoint location
  • Incremental training (20K→40K→60K) is cheaper than restarting each budget from scratch, since each stage reuses the previous checkpoint

Experiment Tracking:

  • Remove --wandb.enable=false (or pass --wandb.enable=true) and log in with your Weights & Biases API key to track experiments (see below)
  • Monitor training curves to detect overfitting or convergence
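
Logging in once before launching lerobot-train is enough; wandb.login() is the standard W&B call:

import wandb

wandb.login()  # prompts for your API key, or reads WANDB_API_KEY from the environment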

📊 Origin

A cleaned version of Amirzon10/armed-picky, with one bad episode removed and episode indices renumbered.

📄 License

MIT
