This model is part of the **reproducing-cross-encoders** collection: a set of cross-encoders trained from various backbones and losses for equal comparison.
This model is a cross-encoder based on microsoft/MiniLM-L12-H384-uncased. It was trained on MS MARCO with a binary cross-entropy (BCE) loss as part of a reproducibility study on training cross-encoders: "Reproducing and Comparing Distillation Techniques for Cross-Encoders"; see the paper for more details.
This model is intended for re-ranking the top results returned by a first-stage retrieval system (e.g., BM25, a bi-encoder, or SPLADE).
Training can be easily reproduced using the associated repository. The exact training configuration used for this model is also detailed in config.yaml.
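Conceptually, the BCE objective is pointwise: the cross-encoder emits a single relevance logit for each (query, passage) pair and is trained against binary relevance labels. The sketch below shows one such training step under these assumptions; the example triples, the hyper-parameters, and the use of `binary_cross_entropy_with_logits` are illustrative, not the exact recipe of the paper (use the associated repository and config.yaml for that).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical mini-batch of (query, passage, relevance) triples in the spirit of MS MARCO.
batch = [
    ("what is a cross-encoder", "A cross-encoder scores a query and a document jointly ...", 1.0),
    ("what is a cross-encoder", "The Eiffel Tower is located in Paris ...", 0.0),
]

tokenizer = AutoTokenizer.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/MiniLM-L12-H384-uncased", num_labels=1
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # illustrative learning rate

queries, passages, labels = zip(*batch)
features = tokenizer(
    list(queries), list(passages), padding=True, truncation=True, return_tensors="pt"
)
targets = torch.tensor(labels)

# One optimization step: one logit per (query, passage) pair,
# pushed towards the binary relevance label by the BCE loss.
model.train()
logits = model(**features).logits.squeeze(-1)
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```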
Quick Start:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-MiniLM-L12-BCE")

# Score a (query, passage) pair: a higher logit means higher estimated relevance.
features = tokenizer("What is experimaestro?", "Experimaestro is a powerful framework for ML experiments management...", padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
```
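To re-rank retrieval results, score every candidate passage against the query and sort by the resulting logit. The snippet below is a small sketch along those lines; the candidate passages are made-up examples standing in for the top results of a first-stage retriever such as BM25 or SPLADE.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-MiniLM-L12-BCE")
model.eval()

query = "What is experimaestro?"
# Hypothetical candidates, e.g. the top passages returned by a first-stage retriever.
candidates = [
    "Experimaestro is a powerful framework for ML experiments management...",
    "The weather in Paris is mild in spring.",
    "Experimaestro manages experiment configurations and job scheduling.",
]

features = tokenizer([query] * len(candidates), candidates,
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Sort candidates from most to least relevant according to the cross-encoder.
ranking = scores.argsort(descending=True)
for rank, idx in enumerate(ranking.tolist(), start=1):
    print(rank, float(scores[idx]), candidates[idx])
```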
We provide evaluations of this cross-encoder re-ranking the top 1000 documents retrieved by naver/splade-v3-distilbert.
| Dataset | RR@10 | nDCG@10 |
|---|---|---|
| msmarco_dev | 37.86 | 44.20 |
| trec2019 | 98.06 | 68.86 |
| trec2020 | 91.51 | 69.35 |
| fever | 74.84 | 75.68 |
| arguana | 22.98 | 33.77 |
| climate_fever | 27.01 | 19.69 |
| dbpedia | 66.92 | 39.41 |
| fiqa | 43.39 | 35.12 |
| hotpotqa | 84.86 | 68.52 |
| nfcorpus | 51.70 | 31.16 |
| nq | 49.36 | 54.80 |
| quora | 61.96 | 66.04 |
| scidocs | 25.52 | 14.31 |
| scifact | 64.86 | 68.10 |
| touche | 58.12 | 31.28 |
| trec_covid | 82.41 | 59.39 |
| robust04 | 67.67 | 44.71 |
| lotte_writing | 63.17 | 54.71 |
| lotte_recreation | 58.43 | 52.92 |
| lotte_science | 41.01 | 33.83 |
| lotte_technology | 49.53 | 41.23 |
| lotte_lifestyle | 70.19 | 60.76 |
| Mean In Domain | 75.81 | 60.80 |
| BEIR 13 | 54.92 | 45.94 |
| LoTTE (OOD) | 58.33 | 48.03 |