Datasets:
correct_hebrew_grammar
If you want to fix or create something, please create a new dataset and submit a PR to https://github.com/embeddings-benchmark/mteb/
I went to the original project's data files:
https://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew/tree/master/data
and took the pre-tokenized data that has all the Hebrew prefixes and relation indicators intact.
For example, turning ״ב הצלחה ל אתה ב ה בחירות הבאות״ -> ״בהצלחה לך בבחירות הבאות״
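For what it's worth, the kind of re-attachment described above can be sketched in a few lines. This is not the author's actual script: the prefix list and the article-absorption rule are my assumptions, and inflected forms such as ״ל אתה״ -> ״לך״ need real morphological handling that this does not attempt.

```python
# Re-attach pre-tokenized single-letter Hebrew prefixes to the word
# that follows them. Prefix sets are assumptions for illustration.
PREPOSITIONS = {"ב", "ל", "כ"}
OTHER_PREFIXES = {"ה", "ו", "ש", "מ"}

def detokenize_prefixes(text: str) -> str:
    out, pending = [], ""
    for tok in text.split():
        if tok in PREPOSITIONS or tok in OTHER_PREFIXES:
            # The definite article ה is absorbed (not written) after a
            # one-letter preposition: ב + ה + בחירות -> בבחירות.
            if tok == "ה" and pending and pending[-1] in PREPOSITIONS:
                continue
            pending += tok
        else:
            out.append(pending + tok)
            pending = ""
    if pending:  # trailing prefixes with no host word
        out.append(pending)
    return " ".join(out)

print(detokenize_prefixes("ב הצלחה ב ה בחירות הבאות"))
# -> בהצלחה בבחירות הבאות
```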
Will close. As @Samoed said, we do not take PRs on MTEB datasets. To change the task, you have to change the task in MTEB directly.
If you want to fix or create something, please create a new dataset and submit a PR to https://github.com/embeddings-benchmark/mteb/
I am unsure where my changes should go for a PR to that repo.
On Hugging Face, e.g. as {user}/HebrewSentimentAnalysis-v2. Then we can create the v2 task on MTEB and re-upload the dataset to the MTEB org.
If I may ask, since we're already talking: is there a place where the benchmark results are saved in detail? Something like results per model, per task, per language; I want to know how well other models perform on this specific task.
Yes exactly - I have added an issue on it here: https://github.com/embeddings-benchmark/mteb/issues/3578
As for the results, we have a guide on how to load and work with them here:
https://embeddings-benchmark.github.io/mteb/usage/loading_results/
All the results are stored in this repository: https://github.com/embeddings-benchmark/results