correct_hebrew_grammar #2
No description provided.
Massive Text Embedding Benchmark org

If you want to fix/or create something, please create new dataset and submit PR to https://github.com/embeddings-benchmark/mteb/

I went to the original project data files:
https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew/tree/master/data

and took the pre-tokenized data that has all the Hebrew prefixes and relation indicators intact.
For example, turning "ב הצלחה ל אתה ב ה בחירות הבאות" into "בהצלחה לך בבחירות הבאות".
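A naive rejoin of the split prefix tokens can be sketched like this. The prefix set and function name are hypothetical, and it only glues prefixes back onto the following word; it does not handle the morphological changes the example above needs (e.g. the definite article ה contracting after ב, or אתה becoming לך):

```python
# Single-letter Hebrew prefixes/relation indicators that the tokenizer split off
# (an assumed set for illustration, not the original project's exact list).
HEBREW_PREFIXES = {"ב", "ל", "ה", "ו", "כ", "ש", "מ"}

def rejoin_prefixes(text: str) -> str:
    """Reattach split single-letter prefixes to the word that follows them."""
    out: list[str] = []
    pending = ""  # accumulated prefixes waiting for a host word
    for token in text.split():
        if token in HEBREW_PREFIXES:
            pending += token
        else:
            out.append(pending + token)
            pending = ""
    if pending:  # trailing prefixes with no host word
        out.append(pending)
    return " ".join(out)

print(rejoin_prefixes("ב הצלחה"))  # בהצלחה
```

A real detokenizer would also need to contract fused forms (ב + ה + בחירות → בבחירות), which is why editing the source data rather than rejoining heuristically is the safer fix.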

lior-at-optune changed pull request status to open
Massive Text Embedding Benchmark org

Will close. As @Samoed said, we do not take PRs on MTEB datasets. To change the task, you have to change it in MTEB directly.

KennethEnevoldsen changed pull request status to closed

> If you want to fix/or create something, please create new dataset and submit PR to https://github.com/embeddings-benchmark/mteb/

I am unsure of where my changes should go for a PR to that repo

Massive Text Embedding Benchmark org

@lior-at-optune You can upload your fixed dataset, and we will add it to the repo

Massive Text Embedding Benchmark org

On Hugging Face, e.g. as {user}/HebrewSentimentAnalysis-v2. Then we can create the v2 in MTEB and re-upload it to the MTEB org.

If I may ask, since we're already talking: is there a place where the benchmark results are saved in detail, something like results per model per task per language? I want to know how well other models perform on this specific task.

Massive Text Embedding Benchmark org

Yes exactly - I have added an issue on it here: https://github.com/embeddings-benchmark/mteb/issues/3578

Massive Text Embedding Benchmark org

As for the results we have a guide on how to load and work with the results here:
https://embeddings-benchmark.github.io/mteb/usage/loading_results/

All the results are stored in this repository: https://github.com/embeddings-benchmark/results
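The results repository stores per-model, per-task JSON files; a minimal sketch of pulling the main score per split could look like this. The record shape here is an assumption for illustration; check an actual file in the repository (or use the loading guide above) for the real schema:

```python
import json

# Hypothetical example record, loosely modeled on result files in
# https://github.com/embeddings-benchmark/results (verify against a real file).
sample = json.dumps({
    "task_name": "HebrewSentimentAnalysis",
    "scores": {
        "test": [{"main_score": 0.71, "languages": ["heb-Hebr"]}],
    },
})

def main_scores(result_json: str) -> dict:
    """Map each evaluation split to its list of main scores."""
    result = json.loads(result_json)
    return {
        split: [entry["main_score"] for entry in entries]
        for split, entries in result["scores"].items()
    }

print(main_scores(sample))  # {'test': [0.71]}
```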
