---
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: image
    dtype: image
  - name: query
    dtype: string
  - name: relevant
    dtype: int64
  - name: clip_score
    dtype: float64
  - name: inat24_image_id
    dtype: int64
  - name: inat24_file_name
    dtype: string
  - name: supercategory
    dtype: string
  - name: category
    dtype: string
  - name: iconic_group
    dtype: string
  - name: inat24_species_id
    dtype: int64
  - name: inat24_species_name
    dtype: string
  - name: latitude
    dtype: float64
  - name: longitude
    dtype: float64
  - name: location_uncertainty
    dtype: float64
  - name: date
    dtype: string
  - name: license
    dtype: string
  - name: rights_holder
    dtype: string
  splits:
  - name: validation
    num_bytes: 293789663.0
    num_examples: 4000
  - name: test
    num_bytes: 1694429058.0
    num_examples: 16000
  download_size: 1879381267
  dataset_size: 1988218721.0
configs:
- config_name: default
  data_files:
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# INQUIRE-Rerank
[Website](https://inquire-benchmark.github.io/) | [Paper](https://arxiv.org/abs/2411.02537) | [GitHub](https://github.com/inquire-benchmark/INQUIRE)
_**How do we empower scientific discovery in millions of nature photos?**_

INQUIRE is a text-to-image retrieval benchmark designed to challenge multimodal models with expert-level queries about the natural world. The dataset aims to emulate the real-world image retrieval and analysis problems faced by scientists working with large-scale image collections. We therefore hope that INQUIRE will both encourage and track advances in the real scientific utility of AI systems.

**Dataset Details**

**INQUIRE-Rerank** is built from 250 expert-level queries. The task fixes an initial ranking of 100 images per query, obtained with CLIP ViT-H-14 zero-shot retrieval over the full 5-million-image iNat24 dataset. The challenge is to rerank each query's 100 images so that relevant images receive high scores (a query may have many relevant images). This fixed starting point makes reranking evaluation consistent and saves you from running the initial retrieval yourself. If you're interested in full-dataset retrieval, check out **INQUIRE-Fullrank**, available from the GitHub repo.
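One natural way to score a reranking is per-query average precision over the binary `relevant` labels. The sketch below is for illustration only; it is not the benchmark's official evaluation code (see the paper for the metrics actually reported):
```
def average_precision(ranked_relevance):
    """AP for one query, where `ranked_relevance` lists the 0/1 `relevant`
    label of each candidate in the order the model ranks them."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

# A ranking that places both relevant images first scores a perfect 1.0:
print(average_precision([1, 1, 0, 0]))  # 1.0
# Burying them lowers the score:
print(average_precision([0, 0, 1, 1]))  # (1/3 + 2/4) / 2 ≈ 0.417
```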
**Loading the Dataset**

To load the dataset with the Hugging Face `datasets` library, first `pip install datasets`, then run:
```
from datasets import load_dataset
inquire_rerank = load_dataset("evendrow/INQUIRE-Rerank", split="validation") # or "test"
```
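Each row pairs one candidate image with one query; the fields below come directly from the schema above:
```
example = inquire_rerank[0]
print(example["query"])       # natural-language query for this candidate pool
print(example["relevant"])    # 1 if the image is relevant to the query, else 0
print(example["clip_score"])  # score from the initial CLIP ViT-H-14 retrieval
example["image"]              # decoded lazily by `datasets` into a PIL image
```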
**Running Baselines**

We publish code for running reranking baselines with both CLIP models and large vision-language models, available in our [GitHub repository](https://github.com/inquire-benchmark/INQUIRE).
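As a rough sketch of what CLIP-based reranking involves (this uses the `open_clip` library with an arbitrary model choice for illustration; it is not the repository's baseline code):
```
import torch
import open_clip  # pip install open_clip_torch

# The model and pretrained tags here are illustrative, not the benchmark's choice.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

def rerank(query, images):
    """Return indices of `images` (a list of PIL images) ordered by
    descending CLIP similarity to the text `query`."""
    with torch.no_grad():
        text_feat = model.encode_text(tokenizer([query]))
        image_feat = model.encode_image(torch.stack([preprocess(im) for im in images]))
        # Cosine similarity via L2-normalized embeddings.
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        sims = (image_feat @ text_feat.T).squeeze(-1)
    return sims.argsort(descending=True).tolist()
```
A stronger reranker (e.g., a vision-language model prompted per image) can be slotted in wherever `rerank` is called.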
**Dataset Sources**

INQUIRE and iNat24 were created by a group of researchers from the following affiliations: iNaturalist, the Massachusetts Institute of Technology, University College London, University of Edinburgh, and University of Massachusetts Amherst.
- **Queries and Relevance Annotation**: All image annotations were performed by a small set of individuals whose interest and familiarity with wildlife image collections enabled them to provide accurate labels for challenging queries.
- **Images and Species Labels**: The images and species labels used in INQUIRE were sourced from data made publicly available by the citizen science platform iNaturalist in the years 2021, 2022, or 2023.
**Licensing Information**

We release INQUIRE under the **CC BY-NC 4.0** license. We also include with each image its respective license information and rights holder. We note that all images in our dataset have suitable licenses for research use.
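For example, the per-image `license` and `rights_holder` fields can be inspected directly:
```
from collections import Counter
from datasets import load_dataset

ds = load_dataset("evendrow/INQUIRE-Rerank", split="validation")
print(Counter(ds["license"]))   # tally of license types across the split
print(ds[0]["rights_holder"])   # attribution for a single image
```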
**Additional Details**

For additional details, check out our paper, [INQUIRE: A Natural World Text-to-Image Retrieval Benchmark](https://arxiv.org/abs/2411.02537).
**Citation Information**

```
@article{vendrow2024inquire,
  title={INQUIRE: A Natural World Text-to-Image Retrieval Benchmark},
  author={Vendrow, Edward and Pantazis, Omiros and Shepard, Alexander and Brostow, Gabriel and Jones, Kate E and Mac Aodha, Oisin and Beery, Sara and Van Horn, Grant},
  journal={NeurIPS},
  year={2024},
}
```