# BFCL-Hi Evaluation

## Overview

BFCL-Hi (Berkeley Function-Calling Leaderboard - Hindi) is a Hindi adaptation of the BFCL benchmark, designed to evaluate the function-calling capabilities of Large Language Models in Hindi. This benchmark assesses models' ability to understand function descriptions and generate appropriate function calls based on natural language instructions.

## Evaluation Workflow

BFCL-Hi follows the **BFCL v2 evaluation methodology** from the original GitHub repository, utilizing the same framework for assessing function-calling capabilities.

### Evaluation Steps

1. **Load the dataset** (see the important note below about dataset loading)
2. **Generate model responses** with function calls based on the prompts
3. **Evaluate function-calling accuracy** using the BFCL v2 evaluation scripts
4. **Obtain metrics** including execution accuracy, structural correctness, and other BFCL metrics

## Important: Dataset Loading

⚠️ **DO NOT use the HuggingFace `load_dataset` method** to load the BFCL-Hi dataset.

The dataset files are hosted on HuggingFace but are **not compatible** with the HuggingFace `datasets` package. This is consistent with the English version of the dataset.

**Recommended Approach:**
- Download the JSON files directly from the HuggingFace repository
- Load them manually using standard JSON loading methods (see the sketch after this list)
- Follow the BFCL v2 repository's data loading methodology

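For illustration only, the sketch below downloads a single file with `huggingface_hub` and reads it manually. The repository ID and file name are placeholders, and the one-JSON-object-per-line layout is assumed from the English BFCL v2 files.

```python
import json

from huggingface_hub import hf_hub_download  # direct file download, bypassing `datasets`

# Placeholder repository and file names -- substitute the actual BFCL-Hi values.
REPO_ID = "<org>/<bfcl-hi-repo>"
FILENAME = "<bfcl_hi_test_simple.json>"

local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME, repo_type="dataset")

# Assumed layout (as in English BFCL v2): JSON Lines, one JSON object per line.
examples = []
with open(local_path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            examples.append(json.loads(line))

print(f"Loaded {len(examples)} examples from {FILENAME}")
```
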
## Implementation

Please follow the **same methodology as BFCL v2 (English)** as documented in the official resources below.

## Setup and Usage

### Step 1: Installation

Clone the Gorilla repository and install dependencies:

```bash
git clone https://github.com/ShishirPatil/gorilla.git
cd gorilla/berkeley-function-call-leaderboard
pip install -r requirements.txt
```

### Step 2: Prepare Your Dataset

- Place your dataset files in the appropriate directory
- Follow the data format specifications from the English BFCL v2 (a quick format check is sketched after this list)

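As a hedged sanity check, the snippet below flags entries that are missing the fields the loaders expect; the `question` and `function` field names are an assumption carried over from the English BFCL v2 schema, not confirmed for the Hindi files.

```python
import json
import sys

# Assumed field names, based on the English BFCL v2 test files; adjust if the
# Hindi files use a different schema.
REQUIRED_FIELDS = ("question", "function")

def check_file(path: str) -> None:
    """Print entries that are missing the assumed required fields."""
    with open(path, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            missing = [k for k in REQUIRED_FIELDS if k not in entry]
            if missing:
                print(f"{path}: entry {idx} is missing {missing}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check_file(path)
```
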
### Step 3: Generate Model Responses

Run inference to generate function calls from your model:

```bash
python openfunctions_evaluation.py \
    --model <model_name> \
    --test-category <category> \
    --num-gpus <num_gpus>
```

**Key Parameters:**
- `--model`: Your model name or path
- `--test-category`: Category to test (e.g., `all`, `simple`, `multiple`, `parallel`, etc.)
- `--num-gpus`: Number of GPUs to use

**For Hindi (BFCL-Hi):**
- Ensure you load the Hindi version of the dataset
- Modify the inference code according to your model and hosted inference framework (a minimal sketch follows this list)

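If your model sits behind an OpenAI-compatible endpoint rather than one of the repository's built-in handlers, a custom generation loop might look roughly like this sketch. The endpoint URL, model name, the `question`/`function` field names, and the wrapping of BFCL function descriptions into the OpenAI `tools` envelope are all assumptions, not part of the official scripts.

```python
import json

from openai import OpenAI  # any OpenAI-compatible client/endpoint

# Assumed endpoint and model name -- replace with your hosted inference setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "<your-model>"

def generate_call(entry: dict) -> str:
    """Ask the model for a function call for one BFCL-Hi entry.

    Field names ("question", "function") are assumed to follow the English
    BFCL v2 schema; adjust them to match the actual files.
    """
    # Wrap each BFCL function description in the OpenAI "tools" envelope.
    tools = [{"type": "function", "function": fn} for fn in entry["function"]]
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": entry["question"]}],
        tools=tools,
    )
    message = response.choices[0].message
    if message.tool_calls:
        call = message.tool_calls[0]
        return json.dumps(
            {"name": call.function.name,
             "arguments": json.loads(call.function.arguments)},
            ensure_ascii=False,  # keep Devanagari text readable in the output
        )
    return message.content or ""
```
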
**Available Test Categories in BFCL-Hi:**
- `simple`: Single function calls
- `multiple`: Multiple function calls
- `parallel`: Parallel function calls
- `parallel_multiple`: Combination of parallel and multiple function calls
- `relevance`: Testing function relevance detection
- `irrelevance`: Testing irrelevant function call handling

### Step 4: Evaluate Results

Evaluate the generated function calls against ground truth:

```bash
python eval_runner.py \
    --model <model_name> \
    --test-category <category>
```

This will:
- Parse the generated function calls
- Compare with ground truth (a simplified comparison is sketched after this list)
- Calculate accuracy metrics
- Generate detailed error analysis

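The official `eval_runner.py` implements the full BFCL matching logic; purely as an illustration of the idea, a simplified comparison could parse a Python-style call with the standard `ast` module and check the function name and keyword arguments against a ground-truth entry.

```python
import ast

def parse_call(call_str: str) -> tuple[str, dict]:
    """Parse a single Python-style call such as get_weather(city="दिल्ली", days=2)."""
    node = ast.parse(call_str.strip(), mode="eval").body
    if not isinstance(node, ast.Call):
        raise ValueError(f"not a function call: {call_str!r}")
    name = ast.unparse(node.func)
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
    return name, kwargs

def matches(generated: str, expected_name: str, expected_args: dict) -> bool:
    """Simplified check: same function name and exact keyword-argument values."""
    try:
        name, kwargs = parse_call(generated)
    except (SyntaxError, ValueError):
        return False  # structurally invalid output counts as an error
    return name == expected_name and kwargs == expected_args

# Example usage:
print(matches('get_weather(city="दिल्ली", days=2)',
              "get_weather", {"city": "दिल्ली", "days": 2}))
```
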
### Step 5: View Results

Results will be saved in the output directory with metrics including:
- **Execution Accuracy**: Whether the function call executes correctly
- **Structural Correctness**: Whether the function call structure is valid
- **Argument Accuracy**: Whether arguments are correctly formatted
- **Overall Score**: Aggregated performance metric

You can also create a custom evaluation script based on the above for more control over the evaluation process.

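For example, a small custom script could aggregate per-category accuracy from your own results file; the JSON-Lines layout with `category` and `correct` fields used here is hypothetical and not the format produced by the official scripts.

```python
import json
from collections import defaultdict

def summarize(results_path: str) -> None:
    """Print per-category and overall accuracy from a results file.

    Assumes a hypothetical JSON-Lines layout: {"category": "...", "correct": true}.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    with open(results_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            totals[record["category"]] += 1
            hits[record["category"]] += int(record["correct"])
    for category in sorted(totals):
        print(f"{category}: {hits[category] / totals[category]:.3f} "
              f"({hits[category]}/{totals[category]})")
    print(f"overall: {sum(hits.values()) / sum(totals.values()):.3f}")

if __name__ == "__main__":
    summarize("bfcl_hi_results.jsonl")  # hypothetical file name
```
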
### Official BFCL v2 Resources

- **GitHub Repository**: [Berkeley Function-Calling Leaderboard](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard)
  - Complete evaluation framework and scripts
  - Dataset loading instructions
  - Evaluation metrics implementation

- **BFCL v2 Documentation**: [BFCL v2 Release](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html)
  - Overview of v2 improvements and methodology

- **Gorilla Project**: [https://gorilla.cs.berkeley.edu/](https://gorilla.cs.berkeley.edu/)
  - Main project page with additional resources

- **Research Paper**: [Gorilla: Large Language Model Connected with Massive APIs](https://arxiv.org/abs/2305.15334)
  - Patil et al., arXiv:2305.15334 (2023)