burtenshaw (HF Staff) committed · verified
Commit 2765a66 · 1 Parent(s): 602d01e

Add MMLU-Pro evaluation result

## Evaluation Results

This PR adds structured evaluation results using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).

### What This Enables

- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

![Model Evaluation Results](https://huggingface.co/huggingface/documentation-images/resolve/main/evaluation-results/eval-results-previw.png)

### Format Details

Results are stored as YAML files in the `.eval_results/` folder. See the [Eval Results Documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.
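As a quick sanity check before opening a PR like this one, the record can be validated in plain Python. This is a minimal sketch using only the fields visible in this PR's `mmlu_pro.yaml`; the `validate_eval_result` helper is hypothetical and not part of the Hub's eval-results tooling.

```python
def validate_eval_result(record):
    """Hypothetical check of the fields shown in this PR's mmlu_pro.yaml.

    Not part of any Hugging Face library; see the eval-results docs for
    the authoritative schema.
    """
    dataset = record.get("dataset")
    if not isinstance(dataset, dict) or not dataset.get("id"):
        raise ValueError("dataset.id is required")
    if not isinstance(record.get("value"), (int, float)):
        raise ValueError("value must be numeric")
    source = record.get("source", {})
    if source and not source.get("url"):
        raise ValueError("a source entry needs a url")
    return True


# Record mirroring the updated entry in this PR's diff.
record = {
    "dataset": {"id": "TIGER-Lab/MMLU-Pro"},
    "task_id": "mmlu_pro",
    "value": 84.3,
    "date": "2026-01-29",
    "source": {
        "url": "https://huggingface.co/zai-org/GLM-4.7",
        "name": "Model Card",
    },
}

print(validate_eval_result(record))  # True
```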

---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*

Files changed (1):

1. `.eval_results/mmlu_pro.yaml` +3 -2
`.eval_results/mmlu_pro.yaml` CHANGED:

```diff
@@ -1,7 +1,8 @@
 - dataset:
     id: TIGER-Lab/MMLU-Pro
+  task_id: mmlu_pro
   value: 84.3
-  date: '2026-01-15'
+  date: '2026-01-29'
   source:
     url: https://huggingface.co/zai-org/GLM-4.7
-  name: Model Card
+    name: Model Card
```