BotFails: A Multimodal Dataset for Robotic Failure Detection
Overview
BotFails is a novel dataset specifically designed to support research on general failure detection in robotic manipulation. Addressing the scarcity of publicly available benchmarks in this domain, BotFails provides multimodal observations — including vision, proprioception, and natural language task instructions — collected across a semantically diverse set of manipulation scenarios.
Data collection was performed via master-arm teleoperation using the LeRobot framework on a real-world robotic platform. The dataset has been curated to maximize diversity across task types, anomaly categories, and operational contexts.
Dataset Description
BotFails covers 10 distinct manipulation tasks spanning two operational domains:
| Domain | Tasks |
|---|---|
| Domestic (6 tasks) | Dish Tidy-Up, Groceries Sorting, Making Coffee, Pouring Coffee, Set the Table, Vegetables & Fruits Sorting |
| Industrial (4 tasks) | Robothon – Buttons, Robothon – Hatch & Probe, Screws Sorting, Soldering |
Each task includes multiple episode-level annotations identifying the presence and type of anomaly. Anomaly types include both:
- Non-failure anomalies — e.g., an unknown object is present in the scene.
- Genuine failures — e.g., manipulation mistakes, semantic mistakes, and other execution errors.
Repository Structure
The dataset is organized into three top-level directories:
BotFails/
├── labels/ # Per-episode CSV annotation files (anomaly labels)
├── normal_train/ # Expert demonstrations (nominal behavior, for training)
└── test/             # Anomalous and expert test episodes (for evaluation)
labels/
Contains per-episode CSV files providing frame-level or episode-level anomaly annotations for each task. Each subdirectory corresponds to one task (e.g., domotic_dishTidyUp_anomaly/) and contains one CSV file per episode.
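As a minimal sketch of how a per-episode label file might be read (the column names `frame_index` and `anomaly_type` below are illustrative assumptions; consult the actual CSV headers in `labels/` for the real schema):

```python
import csv
import io

# Hypothetical label CSV content; the real column names may differ.
sample_csv = """frame_index,anomaly_type
0,normal
1,normal
2,manipulation_mistake
"""

def read_labels(f):
    """Parse a per-episode label CSV into a list of row dictionaries."""
    return list(csv.DictReader(f))

rows = read_labels(io.StringIO(sample_csv))
print(rows[2]["anomaly_type"])  # manipulation_mistake
```

The same `read_labels` helper works unchanged on a real file handle, e.g. `read_labels(open("labels/domotic_dishTidyUp_anomaly/episode_000.csv"))`.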
normal_train/
Contains nominal expert demonstrations used as reference behavior during training. Each task subdirectory follows the LeRobot dataset format:
<task_name>_expert/
├── data/
│ └── chunk-000/ # Parquet observation files
├── meta/
│ ├── episodes.jsonl # Episode metadata
│ ├── info.json # Dataset configuration
│ ├── stats.json # Observation statistics
│ └── tasks.jsonl # Task language instructions
└── videos/
└── chunk-000/ # RGB video streams (mp4)
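The JSONL metadata files store one JSON object per line. A minimal sketch of parsing them (the field names `episode_index`, `length`, and `tasks` are illustrative; check the actual `episodes.jsonl` for the real schema):

```python
import json

# Illustrative episodes.jsonl content; real field names may differ.
sample_jsonl = """\
{"episode_index": 0, "length": 350, "tasks": ["Tidy up the dishes"]}
{"episode_index": 1, "length": 412, "tasks": ["Tidy up the dishes"]}
"""

def load_jsonl(text):
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

episodes = load_jsonl(sample_jsonl)
total_frames = sum(ep["length"] for ep in episodes)
print(total_frames)  # 762
```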
test/
Contains anomalous episodes used for evaluation. Follows the same LeRobot format as normal_train/, with anomaly types detailed in the corresponding labels/ entries.
Data Format
Each split is stored in the LeRobot dataset format, which includes:
- Observations: robot joint positions, end-effector states, and multi-view RGB frames stored as Parquet files.
- Videos: synchronized recordings of each episode from two cameras (observation.images.logitech_1 and observation.images.logitech_2).
- Metadata: episode-level information, dataset statistics, and natural language task descriptions (tasks.jsonl).
- Labels: per-episode CSV files indicating anomaly presence, timing, and category.
Episode Counts per Task
| Task | Normal Train Episodes | Test Episodes | Label Files |
|---|---|---|---|
| domotic_dishTidyUp | 20 | 15 | 15 |
| domotic_groceriesSorting | 20 | 10 | 10 |
| domotic_makingCoffee | 20 | 15 | 15 |
| domotic_pouringCoffee | 20 | 20 | 20 |
| domotic_setTheTable | 18 | 14 | 14 |
| domotic_vegetablesAndFruitsSorting | 20 | 20 | 20 |
| industrial_robothon_buttons | 15 | 10 | 10 |
| industrial_robothon_hatchAndProbe | 10 | 15 | 15 |
| industrial_screws_sorting | 20 | 10 | 10 |
| industrial_soldering | 15 | 15 | 15 |
Total: 144 annotated anomalous episodes across 10 tasks, plus corresponding nominal demonstrations.
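The 144-episode total can be verified directly by summing the Test Episodes column of the table above:

```python
# Test episodes per task, copied from the episode-count table.
test_episodes = {
    "domotic_dishTidyUp": 15,
    "domotic_groceriesSorting": 10,
    "domotic_makingCoffee": 15,
    "domotic_pouringCoffee": 20,
    "domotic_setTheTable": 14,
    "domotic_vegetablesAndFruitsSorting": 20,
    "industrial_robothon_buttons": 10,
    "industrial_robothon_hatchAndProbe": 15,
    "industrial_screws_sorting": 10,
    "industrial_soldering": 15,
}
print(sum(test_episodes.values()))  # 144
```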
Intended Use
BotFails is intended to support research in:
- Robotic failure detection and anomaly localization
- Out-of-distribution detection in robot learning
- Multimodal representation learning for manipulation
- Task-conditioned anomaly detection leveraging natural language instructions
Limitations
- All data was collected in controlled laboratory environments using a single robotic platform.
- Generalization to other hardware configurations or environments may require additional adaptation.
- Anomaly labels reflect human expert judgment and may not be exhaustive.
Citation
If you use BotFails in your research, please cite:
@inproceedings{rolland2026failure,
title={Failure Identification in Imitation Learning via Statistical and Semantic Filtering},
author={Rolland, Quentin and Mayran de Chamisso, Fabrice and Mouret, Jean-Baptiste},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
year={2026},
}
Acknowledgements
Data collection was performed using the LeRobot framework developed by Hugging Face.
@misc{cadene2024lerobot,
author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascal, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
howpublished = "\url{https://github.com/huggingface/lerobot}",
year = {2024}
}