---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: question
      dtype: string
    - name: option_a
      dtype: string
    - name: option_b
      dtype: string
    - name: option_c
      dtype: string
    - name: option_d
      dtype: string
    - name: answer
      dtype: string
    - name: type
      dtype: string
  splits:
    - name: correct_word
      num_bytes: 250887
      num_examples: 1501
    - name: meaning
      num_bytes: 49277
      num_examples: 236
    - name: meaning_in_context
      num_bytes: 20488
      num_examples: 72
    - name: fill_in
      num_bytes: 10202
      num_examples: 52
  download_size: 143580
  dataset_size: 330854
configs:
  - config_name: default
    data_files:
      - split: correct_word
        path: data/correct_word-*
      - split: meaning
        path: data/meaning-*
      - split: meaning_in_context
        path: data/meaning_in_context-*
      - split: fill_in
        path: data/fill_in-*
license: mit
task_categories:
  - question-answering
language:
  - uz
tags:
  - uzbek
  - linguistics
pretty_name: uzlib
size_categories:
  - 1K<n<10K
---

# Uzbek Linguistic Benchmark (UzLiB)

## Table of Contents

- [Dataset Description](#dataset-description)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Citation Information](#citation-information)
- [Contact](#contact)

## Dataset Description

UzLiB (Uzbek Linguistic Benchmark) is the first comprehensive multiple-choice question benchmark designed to evaluate the linguistic understanding and capabilities of Large Language Models (LLMs) in the Uzbek language. It assesses how well models grasp correct Uzbek forms, usage, meanings, and contextual nuances.

For more detailed background on the motivation, creation process, and initial findings, please refer to our blog post (in Uzbek). You can find the evaluation scripts and leaderboard at the GitHub repository.

## How to Use

To load and use the dataset:

```python
from datasets import load_dataset

uzlib = load_dataset("tahrirchi/uzlib")
uzlib
```

## Dataset Structure

The dataset consists of multiple-choice questions, each with four options and a single correct answer.

Example data point:

```json
{
  "id": "CW1242",
  "question": "Berilgan variantlar orasida qaysi biri to‘g‘ri yozilgan?",
  "option_a": "Samolyod",
  "option_b": "Samalyot",
  "option_c": "Samalyod",
  "option_d": "Samolyot",
  "answer": "D",
  "type": "correct_word"
}
```

### Data Fields

- `id` (string): Unique identifier for the question.
- `question` (string): The text of the question.
- `option_a` (string): Answer option A.
- `option_b` (string): Answer option B.
- `option_c` (string): Answer option C.
- `option_d` (string): Answer option D.
- `answer` (string): The correct option label (A, B, C, or D).
- `type` (string): Category of the question. One of:
  - `correct_word`: Correct spelling or word form.
  - `meaning`: Definition of words or phrases.
  - `meaning_in_context`: Word usage in specific contexts.
  - `fill_in`: Filling blanks in sentences.
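Given these fields, a row can be rendered into a multiple-choice prompt for an LLM. The template below is an illustrative sketch using the example data point from this card; it is an assumption, not the official evaluation format from the GitHub repository:

```python
# Illustrative sketch: turn one UzLiB row into a multiple-choice prompt.
# The prompt template here is an assumption, not the official harness.

def build_prompt(row: dict) -> str:
    return (
        f"{row['question']}\n"
        f"A) {row['option_a']}\n"
        f"B) {row['option_b']}\n"
        f"C) {row['option_c']}\n"
        f"D) {row['option_d']}\n"
        "Javob:"  # "Answer:" in Uzbek
    )

# Example data point from this card.
row = {
    "id": "CW1242",
    "question": "Berilgan variantlar orasida qaysi biri to‘g‘ri yozilgan?",
    "option_a": "Samolyod",
    "option_b": "Samalyot",
    "option_c": "Samalyod",
    "option_d": "Samolyot",
    "answer": "D",
    "type": "correct_word",
}

prompt = build_prompt(row)
print(prompt)
```

A model's generated option letter can then be compared against the `answer` field for exact-match scoring.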

### Data Splits / Configurations

The benchmark contains 1861 questions, categorized as follows:

| Category / Split Name | Number of Examples |
| --- | --- |
| `correct_word` | 1501 |
| `meaning` | 236 |
| `meaning_in_context` | 72 |
| `fill_in` | 52 |
| **Total** | **1861** |
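As a quick sanity check, the per-split counts listed above sum to the stated total:

```python
# Per-split example counts as listed on this card.
split_sizes = {
    "correct_word": 1501,
    "meaning": 236,
    "meaning_in_context": 72,
    "fill_in": 52,
}

total = sum(split_sizes.values())
print(total)  # 1861
```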

## Dataset Creation

The questions were sourced from quizzes administered on popular Telegram channels dedicated to Uzbek language expertise.

### Curation Process

  1. Collection: Gathering quizzes from the specified Telegram channels.
  2. Verification: Manually identifying and confirming the correct answer for each quiz question, as this is not directly provided by the Telegram quiz export.
  3. Filtering: Removing duplicate or unsuitable questions.
  4. Categorization: Assigning each question to one of the four types (correct_word, meaning, meaning_in_context, and fill_in).
  5. Standardization: Ensuring every question has exactly four multiple-choice options (A, B, C, D). This involved manually creating distractor options for questions that originally had fewer choices, standardizing the random-guess probability to 25%.
  6. Transliteration: Converting all text to the Uzbek Latin script.
  7. Shuffling: Randomizing the order of answer options (A, B, C, D) for each question.
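The shuffling step (7) must relabel the correct answer when the options move. A minimal sketch of that bookkeeping, assuming options are distinct (this is an illustration, not the actual curation script):

```python
import random

# Minimal sketch of curation step 7: shuffle a question's four options
# and relabel the correct answer so it still points at the right text.
# Assumes the four option texts are distinct. Not the actual script.

def shuffle_options(row: dict, rng: random.Random) -> dict:
    labels = ["a", "b", "c", "d"]
    correct_text = row[f"option_{row['answer'].lower()}"]
    options = [row[f"option_{label}"] for label in labels]
    rng.shuffle(options)
    shuffled = dict(row)
    for label, text in zip(labels, options):
        shuffled[f"option_{label}"] = text
    # Point the answer label at wherever the correct text landed.
    shuffled["answer"] = labels[options.index(correct_text)].upper()
    return shuffled

row = {
    "id": "CW1242",
    "question": "Berilgan variantlar orasida qaysi biri to‘g‘ri yozilgan?",
    "option_a": "Samolyod",
    "option_b": "Samalyot",
    "option_c": "Samalyod",
    "option_d": "Samolyot",
    "answer": "D",
    "type": "correct_word",
}

out = shuffle_options(row, random.Random(0))
print(out["answer"], out[f"option_{out['answer'].lower()}"])
```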

## Citation Information

If you use UzLiB in your research or application, please cite it as follows:

```bibtex
@misc{Shopulatov2025UzLiB,
      title={{UzLiB: A Benchmark for Evaluating LLMs on Uzbek Linguistics}},
      author={Abror Shopulatov},
      year={2025},
      howpublished={\url{https://huggingface.co/datasets/tahrirchi/uzlib}},
      note={Accessed: YYYY-MM-DD} % Please update with the date you accessed the dataset
}
```

## Contact

For inquiries regarding the dataset, please contact a.shopulatov@tilmoch.ai. For issues related to the evaluation code, please refer to the GitHub repository.