luissattelmayer/policy-politics

This model is a fine-tuned version of xlm-roberta-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2418
  • F1: 0.9725
  • Precision: 0.9690
  • Recall: 0.9760
Per-language evaluation results:

Lang  F1        Precision  Recall    Prevalence    n
cy    1.000000  1.000000   1.000000  0.812500     16
hu    1.000000  1.000000   1.000000  0.529412     17
sv    0.987552  0.975410   1.000000  0.868613    137
no    0.980392  1.000000   0.961538  0.666667     39
de    0.977853  0.972881   0.982877  0.630670    463
da    0.974619  0.989691   0.960000  0.800000    125
en    0.974138  0.974138   0.974138  0.758170    153
it    0.971429  0.980769   0.962264  0.670886     79
fr    0.971429  0.944444   1.000000  0.566667     30
es    0.967742  0.937500   1.000000  0.714286     21
nl    0.966507  0.961905   0.971154  0.675325    154
fi    0.960000  0.923077   1.000000  0.750000     16
pl    0.936170  0.956522   0.916667  0.592593     81
cs    0.933333  0.875000   1.000000  0.411765     17
pt    0.833333  0.714286   1.000000  0.454545     22
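A breakdown like the table above can be computed by grouping evaluation examples by language and scoring each group separately. The sketch below is a minimal pure-Python illustration; the function name and the toy data are hypothetical, not taken from the training code:

```python
from collections import defaultdict

def per_language_metrics(langs, y_true, y_pred):
    """Binary precision/recall/F1 plus positive-class prevalence, per language."""
    groups = defaultdict(list)
    for lang, t, p in zip(langs, y_true, y_pred):
        groups[lang].append((t, p))
    out = {}
    for lang, pairs in groups.items():
        tp = sum(1 for t, p in pairs if t == 1 and p == 1)
        fp = sum(1 for t, p in pairs if t == 0 and p == 1)
        fn = sum(1 for t, p in pairs if t == 1 and p == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out[lang] = {
            "f1": f1,
            "precision": precision,
            "recall": recall,
            "prevalence": sum(t for t, _ in pairs) / len(pairs),
            "n": len(pairs),
        }
    return out

# Toy example (not the real evaluation data):
metrics = per_language_metrics(
    ["de", "de", "de", "fr", "fr"],
    [1, 0, 1, 1, 0],   # gold labels
    [1, 0, 0, 1, 0],   # predictions
)
```

In practice the same breakdown is often produced with scikit-learn's `precision_recall_fscore_support` inside a per-language loop; the hand-rolled version above just makes the arithmetic explicit.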

Model description

More information needed

Intended uses & limitations

More information needed
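For inference, a fine-tuned sequence-classification checkpoint like this one is typically loaded through a Transformers pipeline. The sketch below is an assumption about usage, not documented behavior: the repo id is taken from the model tree at the end of this card, and the example input and label names returned by the model are unknown:

```python
from transformers import pipeline

# Repo id assumed from the model tree on this card.
clf = pipeline(
    "text-classification",
    model="luissattelmayer/policy-politics",
)

# Hypothetical multilingual input; the card's evaluation covers 15 languages.
result = clf("Die Regierung plant eine Reform des Rentensystems.")
print(result)  # e.g. a list with one {"label": ..., "score": ...} dict
```

This is not run here because it downloads the model weights; the label vocabulary depends on the (unspecified) training dataset.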

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 16
  • seed: 1234
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
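The hyperparameters above map directly onto a Transformers `TrainingArguments` configuration. A sketch, assuming the Transformers 4.45 API listed under "Framework versions" (the output directory name is hypothetical; model, tokenizer, and dataset setup are omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="policy-politics",      # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,     # effective train batch size: 8 * 2 = 16
    num_train_epochs=5,
    seed=1234,
    lr_scheduler_type="linear",
    # Adam settings below match the values reported in the card
    # (and the Transformers defaults):
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```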

Training results

Training Loss  Epoch   Step  Validation Loss  F1      Precision  Recall
0.2087         0.9989   437  0.1219           0.9751  0.9806     0.9696
0.1227         2.0      875  0.1591           0.9724  0.9868     0.9584
0.0943         2.9989  1312  0.1554           0.9744  0.9759     0.9728
0.0465         4.0     1750  0.2055           0.9732  0.9868     0.9600
0.0339         4.9943  2185  0.1788           0.9759  0.9791     0.9728
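Reading the table above as data, the checkpoint with the best validation F1 can be picked with a one-liner; this is the selection that `Trainer` automates when `load_best_model_at_end=True` and `metric_for_best_model="f1"` are set (whether this run used those options is not stated in the card):

```python
# Per-epoch validation results copied from the table above: (step, val_loss, f1)
results = [
    (437,  0.1219, 0.9751),
    (875,  0.1591, 0.9724),
    (1312, 0.1554, 0.9744),
    (1750, 0.2055, 0.9732),
    (2185, 0.1788, 0.9759),
]

# Checkpoint with the highest validation F1.
best_step, best_loss, best_f1 = max(results, key=lambda r: r[2])
```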

Framework versions

  • Transformers 4.45.1
  • PyTorch 2.8.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.20.3
Model size: 0.3B params (F32, Safetensors)

Model tree for luissattelmayer/policy-politics
