michaelfeil committed
Commit 30f5660 · verified · 1 Parent(s): 760fc4e

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 4096,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": true,
+   "include_prompt": true
+ }
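
With `pooling_mode_lasttoken` set, sentence-transformers keeps the hidden state of the last token, matching the `last_token_pool` helper in the README below. A minimal sketch of how this config is materialized, assuming a recent sentence-transformers release and that the repo has been downloaded so `1_Pooling/` is a local path:

```python
# Minimal sketch, assuming this repo is downloaded to the working directory.
from sentence_transformers.models import Pooling

pooling = Pooling.load("1_Pooling")       # reads 1_Pooling/config.json
print(pooling.get_pooling_mode_str())     # 'lasttoken'
print(pooling.word_embedding_dimension)   # 4096
```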
README.md ADDED
@@ -0,0 +1,231 @@
+ ---
+ license: apache-2.0
+ base_model:
+ - Qwen/Qwen3-8B-Base
+ tags:
+ - transformers
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ ---
+ # Qwen3-Embedding-8B
+
+ <p align="center">
+     <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
+ </p>
+
+ ## Highlights
+
+ The Qwen3 Embedding model series is the latest addition to the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
+
+ **Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B embedding model ranks **No.1** on the MTEB multilingual leaderboard (as of June 5, 2025, score **70.58**), while the reranking model excels in various text retrieval scenarios.
+
+ **Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
+
+ **Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of the Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
+
+ **Qwen3-Embedding-8B** has the following features:
+
+ - Model Type: Text Embedding
+ - Supported Languages: 100+ languages
+ - Number of Parameters: 8B
+ - Context Length: 32k
+ - Embedding Dimension: up to 4096, with user-defined output dimensions ranging from 32 to 4096
+
+ For more details, including benchmark evaluations, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/) and [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
+
+ ## Qwen3 Embedding Series Model list
+
+ | Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
+ |------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
+ | Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
+ | Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
+ | Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
+ | Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
+ | Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
+ | Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
+
+ > **Note**:
+ > - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding (see the truncation sketch below).
+ > - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
+ > - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. We therefore recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions used during model training were originally written in English.
+
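As a sketch of the MRL support mentioned in the note: recent sentence-transformers releases expose a `truncate_dim` argument that shortens the output vectors; 256 below is an arbitrary choice within the supported 32–4096 range, and re-normalizing after truncation is the usual practice for cosine similarity.

```python
# Sketch: Matryoshka-style truncation via sentence-transformers' truncate_dim.
# 256 is an arbitrary target dimension within the supported 32-4096 range.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-8B", truncate_dim=256)
emb = model.encode(["What is the capital of China?"], normalize_embeddings=True)
print(emb.shape)  # (1, 256)
```
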
+ ## Usage
+
+ With Transformers versions earlier than 4.51.0, you may encounter the following error:
+ ```
+ KeyError: 'qwen3'
+ ```
+ Upgrading fixes this, e.g. `pip install -U "transformers>=4.51.0"`.
+
+ ### Sentence Transformers Usage
+
+ ```python
+ # Requires transformers>=4.51.0
+
+ from sentence_transformers import SentenceTransformer
+
+ # Load the model
+ model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")
+
+ # We recommend enabling flash_attention_2 for better acceleration and memory saving,
+ # together with setting `padding_side` to "left":
+ # model = SentenceTransformer(
+ #     "Qwen/Qwen3-Embedding-8B",
+ #     model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
+ #     tokenizer_kwargs={"padding_side": "left"},
+ # )
+
+ # The queries and documents to embed
+ queries = [
+     "What is the capital of China?",
+     "Explain gravity",
+ ]
+ documents = [
+     "The capital of China is Beijing.",
+     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+ ]
+
+ # Encode the queries and documents. Note that queries benefit from using a prompt.
+ # Here we use the prompt called "query" stored under `model.prompts`, but you can
+ # also pass your own prompt via the `prompt` argument.
+ query_embeddings = model.encode(queries, prompt_name="query")
+ document_embeddings = model.encode(documents)
+
+ # Compute the (cosine) similarity between the query and document embeddings
+ similarity = model.similarity(query_embeddings, document_embeddings)
+ print(similarity)
+ # tensor([[0.7493, 0.0751],
+ #         [0.0880, 0.6318]])
+ ```
+
+ ### Transformers Usage
+
+ ```python
+ # Requires transformers>=4.51.0
+
+ import torch
+ import torch.nn.functional as F
+
+ from torch import Tensor
+ from transformers import AutoTokenizer, AutoModel
+
+
+ def last_token_pool(last_hidden_states: Tensor,
+                     attention_mask: Tensor) -> Tensor:
+     # With left padding, the last position is always the last real token.
+     left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
+     if left_padding:
+         return last_hidden_states[:, -1]
+     else:
+         # Otherwise, pick each sequence's final non-padding position.
+         sequence_lengths = attention_mask.sum(dim=1) - 1
+         batch_size = last_hidden_states.shape[0]
+         return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
+
+
+ def get_detailed_instruct(task_description: str, query: str) -> str:
+     return f'Instruct: {task_description}\nQuery:{query}'
+
+
+ # Each query must come with a one-sentence instruction that describes the task
+ task = 'Given a web search query, retrieve relevant passages that answer the query'
+
+ queries = [
+     get_detailed_instruct(task, 'What is the capital of China?'),
+     get_detailed_instruct(task, 'Explain gravity'),
+ ]
+ # No need to add instructions to the retrieval documents
+ documents = [
+     "The capital of China is Beijing.",
+     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+ ]
+ input_texts = queries + documents
+
+ tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-8B', padding_side='left')
+ model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-8B')
+
+ # We recommend enabling flash_attention_2 for better acceleration and memory saving.
+ # model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-8B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()
+
+ max_length = 8192
+
+ # Tokenize the input texts
+ batch_dict = tokenizer(
+     input_texts,
+     padding=True,
+     truncation=True,
+     max_length=max_length,
+     return_tensors="pt",
+ )
+ batch_dict.to(model.device)
+ outputs = model(**batch_dict)
+ embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
+
+ # Normalize embeddings so the dot product below equals cosine similarity
+ embeddings = F.normalize(embeddings, p=2, dim=1)
+ scores = (embeddings[:2] @ embeddings[2:].T)
+ print(scores.tolist())
+ # [[0.7493016123771667, 0.0750647559762001], [0.08795969933271408, 0.6318399906158447]]
+ ```
+
+ 📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, omitting the `instruct` on the query side drops retrieval performance by approximately 1% to 5%.
+
+ ## Evaluation
+
+ ### MTEB (Multilingual)
+
+ | Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Class. | Clust. | Inst. Retri. | Multi. Class. | Pair. Class. | Rerank | Retri. | STS |
+ |----------------------------------|:-------:|:-------------:|:-------------:|:--------------:|:--------:|:--------:|:--------------:|:---------------:|:--------------:|:--------:|:--------:|:------:|
+ | NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10 |
+ | GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33 |
+ | BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.10 | 80.76 | 62.79 | 54.60 | 74.12 |
+ | multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81 |
+ | gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61 |
+ | gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98 |
+ | text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68 |
+ | Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80 |
+ | gemini-embedding-exp-03-07 | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | **29.16** | 83.63 | 65.58 | 67.71 | 79.40 |
+ | **Qwen3-Embedding-0.6B** | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17 |
+ | **Qwen3-Embedding-4B** | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | **11.56** | 26.77 | 85.05 | 65.08 | 69.60 | 80.86 |
+ | **Qwen3-Embedding-8B** | 8B | **70.58** | **61.69** | **80.89** | **74.00** | **57.65** | 10.06 | 28.66 | **86.40** | **65.63** | **70.88** | **81.08** |
+
+ > **Note**: For the compared models, scores are taken from the MTEB online [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) as of May 24, 2025.
+
+ ### MTEB (Eng v2)
+
+ | MTEB English / Models | Param. | Mean (Task) | Mean (Type) | Class. | Clust. | Pair Class. | Rerank. | Retri. | STS | Summ. |
+ |--------------------------------|:--------:|:------------:|:------------:|:--------:|:--------:|:-------------:|:---------:|:--------:|:-------:|:-------:|
+ | multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 |
+ | NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 |
+ | GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 |
+ | gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 |
+ | stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 |
+ | gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.90 | 50.47 | 58.09 | 82.69 | 35.74 |
+ | gemini-embedding-exp-03-07 | - | 73.30 | 67.67 | 90.05 | **59.39** | **87.70** | 48.59 | 64.35 | 85.29 | **38.28** |
+ | **Qwen3-Embedding-0.6B** | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 |
+ | **Qwen3-Embedding-4B** | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | **88.72** | 34.39 |
+ | **Qwen3-Embedding-8B** | 8B | **75.22** | **68.71** | **90.43** | 58.57 | 87.52 | **51.56** | **69.44** | 88.58 | 34.83 |
+
+ ### C-MTEB (MTEB Chinese)
+
+ | C-MTEB | Param. | Mean (Task) | Mean (Type) | Class. | Clust. | Pair Class. | Rerank. | Retr. | STS |
+ |------------------|--------|------------|------------|--------|--------|-------------|---------|-------|-------|
+ | multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 |
+ | bge-multilingual-gemma2 | 9B | 67.64 | 68.52 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 |
+ | gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.50 | 68.21 | 71.86 | 60.05 |
+ | gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 |
+ | ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.50 | **85.98** | **72.86** | 76.97 | **63.92** |
+ | **Qwen3-Embedding-0.6B** | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 |
+ | **Qwen3-Embedding-4B** | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 |
+ | **Qwen3-Embedding-8B** | 8B | **73.84** | **75.00** | **76.97** | **80.08** | 84.23 | 66.99 | **78.21** | 63.53 |
+
+ ## Citation
+
+ If you find our work helpful, feel free to cite it.
+
+ ```
+ @misc{qwen3-embedding,
+     title  = {Qwen3-Embedding},
+     url    = {https://qwenlm.github.io/blog/qwen3/},
+     author = {Qwen Team},
+     month  = {May},
+     year   = {2025}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "architectures": [
+     "Qwen3Model"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 12288,
+   "max_position_embeddings": 40960,
+   "max_window_layers": 36,
+   "model_type": "qwen3",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 36,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 1000000,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.52.4",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151665
+ }
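
A small sketch of what these attention settings imply: with 32 query heads, 8 KV heads (grouped-query attention), and a head dimension of 128, the projection shapes work out as below. This is derived arithmetic from the config values only, not library calls.

```python
# Sketch: projection shapes implied by config.json (grouped-query attention).
hidden_size, num_heads, num_kv_heads, head_dim = 4096, 32, 8, 128

print("q_proj:", (num_heads * head_dim, hidden_size))            # (4096, 4096)
print("k_proj/v_proj:", (num_kv_heads * head_dim, hidden_size))  # (1024, 4096)
print("query heads per KV head:", num_heads // num_kv_heads)     # 4
```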
config_sentence_transformers.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "prompts": {
+     "query": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
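
These prompts are what `prompt_name="query"` in the README's Sentence Transformers example resolves to; documents use the empty prompt. A brief sketch:

```python
# Sketch: the stored "query" prompt is prepended automatically by encode().
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")
print(model.prompts["query"])
# 'Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery:'

# Equivalent to manually prepending that string to the input text:
emb = model.encode("What is the capital of China?", prompt_name="query")
```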
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "max_new_tokens": 2048,
+   "transformers_version": "4.51.3"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99b343597fe840706146144699a8b9188dd3387e43eb61faf0231b70b249d451
+ size 4900037024
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dff635b0f6dbbaad2a2d633ef037ec0a39bc165cc1806c712fbd6fcbcb4526c0
+ size 4915959512
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30b1d4c53d84eb018f642cad7b373f0aabf79699872d8702c1f38577c0a59a2f
+ size 4983067656
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36cbc9c60375693629f25743c1e77ebb1724af58e671b2376463193c7fd21ef6
+ size 335570376
model.safetensors.index.json ADDED
@@ -0,0 +1,405 @@
+ {
+   "metadata": {
+     "total_size": 15134590976
+   },
+   "weight_map": {
+     "embed_tokens.weight": "model-00001-of-00004.safetensors",
+     "layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.0.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.0.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.1.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.1.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.10.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.10.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.11.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.11.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.12.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.12.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.13.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.13.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.14.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.14.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.15.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.15.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.16.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.16.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.17.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.17.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.18.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.18.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.18.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.19.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.19.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.19.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.19.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.2.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.2.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.20.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.20.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.20.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.20.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.20.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.20.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.21.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.21.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.21.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.21.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.21.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.21.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.21.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.21.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.22.self_attn.k_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.22.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.22.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.22.self_attn.q_norm.weight": "model-00002-of-00004.safetensors",
+     "layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.23.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.23.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.24.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.24.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.25.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.25.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.26.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.26.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.27.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.27.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.28.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.28.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.29.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.29.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.3.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.3.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.30.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.30.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.30.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.30.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.31.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.31.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.31.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.31.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.31.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.31.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.31.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.32.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.32.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.32.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.32.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.32.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.32.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.32.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.32.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.32.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.32.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.32.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.33.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.33.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.33.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.33.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.33.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.33.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.33.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.33.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.33.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.33.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.33.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.34.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.34.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.34.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.34.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.34.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "layers.34.self_attn.k_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.34.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.34.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.34.self_attn.q_norm.weight": "model-00003-of-00004.safetensors",
+     "layers.34.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.34.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.35.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "layers.35.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "layers.35.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "layers.35.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "layers.35.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "layers.35.self_attn.k_norm.weight": "model-00004-of-00004.safetensors",
+     "layers.35.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.35.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "layers.35.self_attn.q_norm.weight": "model-00004-of-00004.safetensors",
+     "layers.35.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.35.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.4.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.4.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.5.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.5.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.6.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.6.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.7.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.7.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.7.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.7.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.8.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.8.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.8.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.8.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "layers.8.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.8.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.8.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.8.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.9.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "layers.9.self_attn.k_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.9.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.9.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.9.self_attn.q_norm.weight": "model-00001-of-00004.safetensors",
+     "layers.9.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "layers.9.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "norm.weight": "model-00004-of-00004.safetensors"
+   }
+ }
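
A quick sketch for inspecting this index locally; the file path assumes the repo has been downloaded to the working directory. It reports the declared total size and how many tensors each shard holds.

```python
# Sketch: inspect the shard index (file path assumes a local download).
import json
from collections import Counter

with open("model.safetensors.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])        # 15134590976 bytes (~15.1 GB)
print(Counter(index["weight_map"].values()))  # number of tensors per shard file
```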
model_auto.py ADDED
@@ -0,0 +1,47 @@
+ from transformers import AutoModel
+ from huggingface_hub import HfApi, snapshot_download
+
+
+ def get_model(model_name: str):
+     """
+     Load a model from the Hugging Face model hub.
+
+     Args:
+         model_name (str): The name of the model to load.
+
+     Returns:
+         transformers.PreTrainedModel: The loaded model.
+     """
+     return AutoModel.from_pretrained(model_name, torch_dtype="bfloat16")
+
+
+ def upload_and_convert(
+     model_name: str = "mixedbread-ai/mxbai-rerank-base-v2",
+ ):
+     """Re-save a model with AutoModel and upload the result to the hub."""
+     model = get_model(model_name)
+     split_name = model_name.split("/")[-1]
+
+     # Download the full original repo (tokenizer, README, etc.), then
+     # overwrite the config and weights with the AutoModel re-save.
+     snapshot_download(model_name, local_dir=f"./{split_name}")
+     model.save_pretrained(f"./{split_name}")
+
+     api = HfApi()
+     api.create_repo(repo_id=f"michaelfeil/{split_name}-auto", exist_ok=True)
+     api.upload_folder(
+         repo_id=f"michaelfeil/{split_name}-auto",
+         folder_path=f"./{split_name}",
+     )
+
+
+ if __name__ == "__main__":
+     upload_and_convert(model_name="Qwen/Qwen3-Embedding-0.6B")
+     upload_and_convert(model_name="Qwen/Qwen3-Embedding-4B")
+     upload_and_convert(model_name="Qwen/Qwen3-Embedding-8B")
+     print("Models uploaded successfully.")
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
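
The three modules above form the full encoding pipeline: Transformer → last-token Pooling → L2 Normalize. As a rough sketch of the same stack composed by hand (loading by repo name builds it from this file automatically, including tokenizer settings this sketch omits):

```python
# Sketch: manually composing the pipeline described by modules.json.
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("Qwen/Qwen3-Embedding-8B")
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 4096
    pooling_mode="lasttoken",
)
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
```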
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83cdf8c3a34f68862319cb1810ee7b1e2c0a44e0864ae930194ddb76bb7feb8d
+ size 11422947
tokenizer_config.json ADDED
@@ -0,0 +1,208 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "chat_template": "{%- if tools %}\n    {{- '<|im_start|>system\\n' }}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- messages[0]['content'] }}\n    {%- else %}\n        {{- 'You are a helpful assistant.' }}\n    {%- endif %}\n    {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n    {%- else %}\n        {{- '<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {{- '<|im_start|>' + message.role }}\n        {%- if message.content %}\n            {{- '\\n' + message.content }}\n        {%- endif %}\n        {%- for tool_call in message.tool_calls %}\n            {%- if tool_call.function is defined %}\n                {%- set tool_call = tool_call.function %}\n            {%- endif %}\n            {{- '\\n<tool_call>\\n{\"name\": \"' }}\n            {{- tool_call.name }}\n            {{- '\", \"arguments\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- '}\\n</tool_call>' }}\n        {%- endfor %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n            {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- message.content }}\n        {{- '\\n</tool_response>' }}\n        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
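
A few details worth noting from this config: there is no BOS token, the EOS token is `<|im_end|>`, and padding uses `<|endoftext|>`. A sketch that confirms them (`padding_side="left"` matches the README's recommendation):

```python
# Sketch: confirming special-token settings from tokenizer_config.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-Embedding-8B", padding_side="left")
print(tok.bos_token)         # None
print(tok.eos_token)         # '<|im_end|>'
print(tok.pad_token)         # '<|endoftext|>'
print(tok.model_max_length)  # 131072
```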
vocab.json ADDED
The diff for this file is too large to render. See raw diff