Commit 02afdc0
Parent: dd99b7d
Add: README english ver

Files changed:
- README.md (+7 −7)
- README_en.md (+149 −0)
README.md
CHANGED

@@ -19,12 +19,12 @@ tags:
 
 **"Korean Agent Benchmark Project"**
 
-
-
-
+**[English](README_en.md) | 한국어**
+
+As AI agents become more sophisticated, it has become crucial to measure their performance precisely under conditions similar to real-world environments. However, most benchmarks are designed for English-speaking environments, which limits their ability to reflect Korea's unique usage contexts.
 To address this issue, we have developed a high-quality agent benchmark specialized for Korean real-world usage environments.
 
-
+# Key Features ✨
 **1. Step-by-step Task Design**
 We analyze agent capabilities across 7 levels, from simple tool calls to long-term context handling and robustness under errors.
 
@@ -52,12 +52,12 @@ dataset = load_dataset("Hugging-Face-KREW/Ko-AgentBench", data_files="L1.json")
 dataset = load_dataset("Hugging-Face-KREW/Ko-AgentBench", data_files="*.json")
 ```
 
-
+# Overview
 
 - Defines a task classification scheme for agent benchmark design
 - Designed to evaluate, level by level, the capabilities an agent needs when using tool calling
 
-
+## Scope
 
 - Evaluation targets: open-weight sLLMs (*that support tool calling), commercial APIs
 - Evaluation scope: tool-calling ability as an agent in single-turn and multi-turn conversations
 
@@ -134,7 +134,7 @@ dataset = load_dataset("Hugging-Face-KREW/Ko-AgentBench", data_files="*.json")
 
 
 
-
+## Links
 You can find more details about Ko-AgentBench here:
 - [Live Leaderboard](https://huggingface.co/spaces/huggingface-KREW/Ko-AgentBench)
 - [Dataset](https://huggingface.co/datasets/huggingface-KREW/Ko-AgentBench)
README_en.md
ADDED
|
@@ -0,0 +1,149 @@
---
language:
- en
license: apache-2.0
task_categories:
- question-answering
tags:
- agent
- benchmark
- tool-use
- korean
---

<p align="center">
<img src="banner.png" />
</p>

# **🇰🇷 Ko-AgentBench v1**

**"Korean Agent Benchmark Project"**

**English | [한국어](README.md)**

As AI agents become more sophisticated, it has become crucial to measure their performance precisely under conditions similar to real-world environments. However, most benchmarks are designed for English-speaking environments, which limits their ability to reflect Korea's unique usage contexts.

To address this issue, we have developed a high-quality agent benchmark specialized for Korean real-world usage environments.

# Key Features ✨

**1. Step-by-step Task Design**
We analyze agent capabilities across 7 levels, from simple tool calls to long-term context handling and robustness under errors.

**2. 18 Korea-specific APIs and High-quality Scenarios Tailored to Real-life Environments**
Built on APIs from Korean real-world services such as Naver Maps, Kakao, and Korean websites, the benchmark implements realistic problem-solving scenarios close to domestic users' daily lives, such as booking an appointment or searching blog reviews.

**3. Cache-based Iterative Evaluation and Robustness Testing**
We address a chronic problem of existing benchmarks: results drifting as the underlying live data changes. By replaying cached API responses, the benchmark stays consistent and reliable across runs.

We also evaluate how models recognize and respond to intentionally injected errors, selecting for models that behave stably in real-world environments.

**4. Step-specific Precision Metrics**
We evaluate each step of problem solving, including tool selection, parameter configuration, and data flow, to quantitatively identify a model's strengths and weaknesses.

## **Data Loading**

```python
from datasets import load_dataset

# Load a specific level
dataset = load_dataset("Hugging-Face-KREW/Ko-AgentBench", data_files="L1.json")

# Or load all levels
dataset = load_dataset("Hugging-Face-KREW/Ko-AgentBench", data_files="*.json")
```

# Overview

- Defines a task classification scheme for agent benchmark design
- Designed to evaluate, level by level, the capabilities an agent needs when using tool calling

## Scope

- Evaluation targets: open-weight sLLMs (*that support tool calling), commercial APIs
- Evaluation scope: tool-calling ability as an agent in single-turn and multi-turn conversations
- Applied APIs: 18 Korea-specific open APIs

# Task Levels

## Single-Turn

**L1. Single Tool Call**
- Goal: Verify the most basic API-calling capability
- Description: Check whether the given tool can be executed with correct parameters
- Feature: Evaluates accuracy only, executing the request as given, whether the API is named explicitly or phrased in natural language
- Example: "Search for 'Rapid Current' using the Naver Book API and tell me the price."
- Example: "Tell me the price of the book 'Rapid Current'."

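An L1 check boils down to comparing the model's emitted tool call against a gold call. The sketch below is illustrative only, assuming a hypothetical `score_l1` scorer and call format, not the benchmark's actual harness:

```python
# Minimal sketch of an L1 check: did the model call the right tool
# with exactly the right parameters? All names here are illustrative.

def score_l1(predicted_call: dict, gold_call: dict) -> bool:
    """An L1 item passes only if tool name and arguments match exactly."""
    return (predicted_call["tool"] == gold_call["tool"]
            and predicted_call["arguments"] == gold_call["arguments"])

gold = {"tool": "naver_book_api", "arguments": {"query": "Rapid Current"}}
pred = {"tool": "naver_book_api", "arguments": {"query": "Rapid Current"}}
print(score_l1(pred, gold))  # True
```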
**L2. Tool Selection**
- Goal: Verify the ability to select the optimal API from multiple candidate tools
- Description: The user makes a request in natural language, and the model must select the most suitable tool from the given tool list
- Feature: Evaluates accurate mapping from the input natural language to a tool
- Example: "Check the price of the 'All Back English Middle 2-1 Cheonjae (Kim)' book."
- Candidate tools: `hotel_booking_api`, `aladin_books_api`
- Candidate tools must be mutually unrelated.

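Scoring an L2 item can be sketched as follows; `score_l2` is a hypothetical helper, and only the candidate tool names come from the example above:

```python
# Sketch of an L2 check: the model must pick one tool from the candidate
# list, and it must be the gold tool. Illustrative only.

def score_l2(selected_tool: str, candidates: list, gold_tool: str) -> bool:
    # A pick outside the candidate list fails outright (hallucinated tool).
    return selected_tool in candidates and selected_tool == gold_tool

candidates = ["hotel_booking_api", "aladin_books_api"]
print(score_l2("aladin_books_api", candidates, "aladin_books_api"))  # True
print(score_l2("hotel_booking_api", candidates, "aladin_books_api"))  # False
```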
**L3. Sequential Tool Reasoning**
- Goal: Verify planning and execution through multi-step reasoning
- Description: Check whether a correct pipeline can be built by feeding the output of one tool into another
- Feature: Evaluates a planned chain of tools rather than isolated calls
- Example: "Tell me when the Instax11 I bought on 11st Amazon will be delivered."
- Candidate tools: `11st_order_api`, `customs_api`, `cj_delivery_api`
- Tools must be callable sequentially (11st tracking-number lookup → customs clearance → courier company)

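The delivery example above can be sketched as a chained pipeline with mocked tools. The function names mirror the candidate tools; all return values are made up for illustration:

```python
# Sketch of the L3 delivery pipeline: the output of each mocked call
# feeds the next. Data is fake; only the chaining pattern matters.

def order_api_11st(product):        # mock of 11st_order_api
    return {"product": product, "tracking_no": "CJ123"}

def customs_api(tracking_no):       # mock customs-clearance lookup
    return {"tracking_no": tracking_no, "cleared": True}

def cj_delivery_api(tracking_no):   # mock courier lookup
    return {"tracking_no": tracking_no, "eta": "2024-01-05"}

def delivery_eta(product):
    order = order_api_11st(product)              # step 1: find the order
    customs = customs_api(order["tracking_no"])  # step 2: customs status
    assert customs["cleared"]
    return cj_delivery_api(order["tracking_no"])["eta"]  # step 3: ETA

print(delivery_eta("Instax11"))  # 2024-01-05
```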
**L4. Parallel Tool Reasoning**
- Goal: Collect information in parallel and draw a conclusion from the combined results
- Description: Call multiple independent tools simultaneously, compare and analyze the results, and produce a final answer
- Feature: Evaluates multi-source aggregation (synthesizing and comparing information)
- Example: "Check the stock of the 'Hanroro Grapefruit Apricot Club' book."
- Candidate tools: `kyobo_books_api`, `aladin_books_api`
- Expected answer: 12 copies at Kyobo Book Centre and 18 at Aladin, 30 in total.
- The candidate tools must serve the same function in parallel.

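The parallel pattern above can be sketched with mocked stock APIs whose numbers follow the expected answer (12 + 18 = 30); the implementation is a hypothetical illustration, not the benchmark harness:

```python
# Sketch of L4: call two same-function tools concurrently and aggregate.
from concurrent.futures import ThreadPoolExecutor

def kyobo_books_api(title):   # mock: stock at Kyobo Book Centre
    return 12

def aladin_books_api(title):  # mock: stock at Aladin
    return 18

def total_stock(title):
    # Fan out the same query to both sources, then sum the results.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda api: api(title),
                                [kyobo_books_api, aladin_books_api]))
    return sum(results)

print(total_stock("Hanroro Grapefruit Apricot Club"))  # 30
```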
**L5. Error Handling and Robustness**
- Goal: Verify the ability to cope with error situations
- Description: Evaluates how various failure modes are handled, beyond simply reporting "failed"
- **Sub-items:**
  - A. Asking follow-up questions
    - Guide the user toward a clearer request when information is insufficient
  - B. Hallucination prevention
    - Do not call non-existent APIs
    - Do not "pretend to succeed" when a call has failed
  - C. Fallback maneuvers
    - Whether an alternative API with the same function can be used when a specific API fails
    - Example: When a Naver Movie API call fails → report "API call failed" or call the Kakao Movie API as an alternative

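The fallback maneuver (sub-item C) can be sketched like this; both APIs are mocks, with the primary wired to fail so the alternative path runs:

```python
# Sketch of the L5-C fallback pattern: try the primary API, and on failure
# either report the error or fall back to a same-function alternative.

def naver_movie_api(title):
    raise ConnectionError("API call failed")   # simulate an outage

def kakao_movie_api(title):                    # same-function alternative
    return {"title": title, "rating": 8.1}

def movie_info(title):
    try:
        return naver_movie_api(title)
    except ConnectionError:
        # Never pretend the first call succeeded; switch to the fallback.
        return kakao_movie_api(title)

print(movie_info("Parasite"))  # served by the Kakao mock
```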
## Multi-Turn

**L6. Efficient Tool Utilization**
- Goal: Verify the ability to reuse previous tool results efficiently
- Description: Re-calling APIs in every situation is accurate but inefficient in cost and latency; conversely, unconditionally reusing stale information hurts accuracy.
- Feature: Evaluates whether the model makes a reasonable choice between re-calling and reusing
- Example:
  - User: "Compare Coupang and Naver prices." → Result: Coupang 80, Naver 85
  - User: "What was the Naver price?"
  - Correct answer: 85 (reuse past information; avoid an unnecessary re-call)
  - Wrong answer: calling the API again, or "I don't know"

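The "re-call vs reuse" choice above can be sketched with a result cache and a call counter; the price API is a mock and the caching helper is hypothetical:

```python
# Sketch of L6: a later turn is answered from cached tool results,
# so no redundant API call is made.

calls = {"n": 0}

def naver_price_api(item):    # mock price lookup with a call counter
    calls["n"] += 1
    return 85

cache = {}

def get_naver_price(item):
    if item not in cache:               # turn 1: real call, result cached
        cache[item] = naver_price_api(item)
    return cache[item]                  # later turns: reuse, no re-call

get_naver_price("keyboard")             # first ask → API called once
answer = get_naver_price("keyboard")    # "What was the Naver price?" → reuse
print(answer, calls["n"])  # 85 1
```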
**L7. Long-Context Reasoning**
- Goal: Verify the ability to maintain long-term context across a multi-turn conversation
- Description: Remember information from several turns earlier and connect it with a new question to perform the correct tool call
- Example:
  - User's first question: "I'm going to travel to Jeju Island."
  - Later: "How's the weather?" → Call the weather API using the Jeju Island context
  - (Additional turn) "If it rains, find places where I can buy an umbrella." → Use all prior Jeju Island + weather context

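The Jeju example above can be sketched as a slot carried across turns: the destination mentioned in turn 1 later fills the weather call's location parameter. The dialogue handler and weather API below are hypothetical mocks:

```python
# Sketch of L7 context carry: turn 1 stores the destination, and a
# later turn resolves "the weather" against that remembered context.

def weather_api(location):   # mock weather lookup
    return f"Rainy in {location}"

context = {}

def handle_turn(utterance):
    if "travel to" in utterance:
        # remember the destination for later turns
        context["location"] = utterance.split("travel to ")[1].rstrip(".")
        return "Sounds fun!"
    if "weather" in utterance:
        # fill the tool parameter from the remembered destination
        return weather_api(context["location"])
    return "..."

handle_turn("I'm going to travel to Jeju Island.")
print(handle_turn("How's the weather?"))  # Rainy in Jeju Island
```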
## Links
You can find more details about Ko-AgentBench here:
- [Live Leaderboard](https://huggingface.co/spaces/huggingface-KREW/Ko-AgentBench)
- [Dataset](https://huggingface.co/datasets/huggingface-KREW/Ko-AgentBench)
- [Github](https://github.com/Hugging-Face-KREW/Ko-AgentBench)

## Contact
If you have any questions about the dataset or the benchmark, please contact us!

Hugging Face KREW is a Korean non-profit research organization that strives to understand artificial intelligence deeply through Hugging Face and to contribute to open source.
- Blog: [KREW-blog](https://hugging-face-krew.github.io/)
- HuggingFace Community: [@huggingface-KREW](https://huggingface.co/huggingface-KREW)
- LinkedIn: [Hugging Face KREW](https://www.linkedin.com/company/hugging-face-krew/)