Update eval.yaml
The new eval.yaml looks perfect now. Thanks for your help! Merging now.
Thanks! Do you think it would be possible to open PRs on the model repos you evaluated with the results from your leaderboard, so that we can populate the benchmark leaderboard?
Here is how to do it: https://huggingface.co/docs/hub/eval-results
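For reference, the same metadata can also be pushed programmatically with `huggingface_hub.metadata_update`, which writes the hub's `model-index` block into the model card. A minimal sketch (the task type, metric name, and value below are placeholders, not real EvasionBench numbers):

```python
from huggingface_hub import metadata_update

# Placeholder eval results in the hub's model-index format.
# Swap the task type and metric entries for the actual EvasionBench scores.
eval_metadata = {
    "model-index": [
        {
            "name": "Eva-4B-V2",
            "results": [
                {
                    "task": {"type": "text-generation"},  # assumed task type
                    "dataset": {
                        "type": "FutureMa/EvasionBench",
                        "name": "EvasionBench",
                    },
                    "metrics": [
                        {
                            "type": "accuracy",  # placeholder metric type
                            "value": 0.0,        # placeholder value
                            "name": "EvasionBench score",
                        }
                    ],
                    "source": {
                        "name": "EvasionBench",
                        "url": "https://huggingface.co/datasets/FutureMa/EvasionBench",
                    },
                }
            ],
        }
    ]
}

# Requires `huggingface-cli login`; merges the block above into the card's YAML front matter.
metadata_update("FutureMa/Eva-4B-V2", eval_metadata)
```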
Hi @SaylorTwift, thanks for the suggestion! I've added .eval_results/evasionbench.yaml to https://huggingface.co/FutureMa/Eva-4B-V2 with the benchmark results.
However, I noticed that the Dataset Viewer on https://huggingface.co/datasets/FutureMa/EvasionBench is currently showing "The dataset viewer should be available soon. Please retry later.", which seems to be related to the eval.yaml validation.
Regarding submitting PRs to other model repos with their evaluation results — I'll work on that.
Hey! The viewer issue is fixed.
That's super nice, ping me if you need any help :)
Hi @SaylorTwift,
I've submitted PRs with EvasionBench evaluation results to the following open-source model repos:
- GLM-4.7: https://huggingface.co/zai-org/GLM-4.7/discussions/43
- Qwen3-Coder: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct/discussions/31
- MiniMax-M2.1: https://huggingface.co/MiniMaxAI/MiniMax-M2.1/discussions/27
- DeepSeek-V3.2: https://huggingface.co/deepseek-ai/DeepSeek-V3.2/discussions/42
- Kimi-K2: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905/discussions/23
Thanks for the suggestion! Let me know if anything needs to be adjusted.
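In case anyone wants to reproduce this on other checkpoints: such PRs can be opened programmatically by passing `create_pr=True` to `huggingface_hub.metadata_update`. A rough sketch of how it could be scripted (the scores below are placeholders; the real numbers are in the PRs linked above):

```python
from huggingface_hub import metadata_update

# Model repos from the list above, mapped to placeholder scores.
results = {
    "zai-org/GLM-4.7": 0.0,
    "Qwen/Qwen3-Coder-480B-A35B-Instruct": 0.0,
    "MiniMaxAI/MiniMax-M2.1": 0.0,
    "deepseek-ai/DeepSeek-V3.2": 0.0,
    "moonshotai/Kimi-K2-Instruct-0905": 0.0,
}

for repo_id, score in results.items():
    metadata = {
        "model-index": [
            {
                "name": repo_id.split("/")[-1],
                "results": [
                    {
                        "task": {"type": "text-generation"},  # assumed task type
                        "dataset": {
                            "type": "FutureMa/EvasionBench",
                            "name": "EvasionBench",
                        },
                        "metrics": [
                            # placeholder metric type/name; swap in the real ones
                            {"type": "accuracy", "value": score, "name": "EvasionBench score"}
                        ],
                    }
                ],
            }
        ]
    }
    # create_pr=True opens a pull request on the target repo instead of committing directly.
    metadata_update(repo_id, metadata, create_pr=True)
    print(f"Opened a PR on {repo_id}")
```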