BaoLocTown committed (verified)
Commit dded981 · 1 Parent(s): f7e5ea5

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ hist_ner_type_test.png filter=lfs diff=lfs merge=lfs -text
+ hist_ner_type_train.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,120 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ - vi
+ library_name: gliner
+ datasets:
+ - urchade/pile-mistral-v0.1
+ - numind/NuNER
+ - knowledgator/GLINER-multi-task-synthetic-data
+ pipeline_tag: token-classification
+ tags:
+ - NER
+ - GLiNER
+ - information extraction
+ - encoder
+ - entity recognition
+ ---
+ # About
+
+ GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entity types, and to Large Language Models (LLMs), which, despite their flexibility, are costly and too large for resource-constrained scenarios.
+
+ This particular version uses a bi-encoder architecture, where the textual encoder is [DeBERTa v3 large](https://huggingface.co/microsoft/deberta-v3-large) and the entity label encoder is the sentence transformer [BGE-base-en](https://huggingface.co/BAAI/bge-base-en-v1.5).
+
+ Such an architecture brings several advantages over the uni-encoder GLiNER:
+ * An unlimited number of entity types can be recognized at a single time;
+ * Faster inference when entity embeddings are precomputed;
+ * Better generalization to unseen entities.
+
+ However, it has some drawbacks, such as the lack of inter-label interactions, which makes it harder for the model to disambiguate semantically similar but contextually different entities.
+
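The two encoders mentioned above are also recorded in the `gliner_config.json` added later in this commit. A minimal sketch for checking which encoders a downloaded copy of this model uses, assuming `gliner_config.json` is available locally (the field names `model_name` and `labels_encoder` are taken from that config):

```python
import json

# Load the gliner_config.json shipped with this model
# (assumes a local copy of the file added in this commit).
with open("gliner_config.json") as f:
    config = json.load(f)

# "model_name" is the textual encoder, "labels_encoder" the entity label encoder.
print("text encoder: ", config["model_name"])      # microsoft/deberta-v3-large
print("label encoder:", config["labels_encoder"])  # BAAI/bge-base-en-v1.5
```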
+ ### Installation & Usage
+ Install or update the gliner package:
+ ```bash
+ pip install gliner -U
+ ```
+
+ Once you've installed the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
+
+ ```python
+ from gliner import GLiNER
+
+ model = GLiNER.from_pretrained("knowledgator/gliner-bi-large-v1.0")
+
+ text = """
+ Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
+ """
+
+ labels = ["person", "award", "date", "competitions", "teams"]
+
+ entities = model.predict_entities(text, labels, threshold=0.3)
+
+ for entity in entities:
+     print(entity["text"], "=>", entity["label"])
+ ```
+
+ ```
+ Cristiano Ronaldo dos Santos Aveiro => person
+ 5 February 1985 => date
+ Al Nassr => teams
+ Portugal national team => teams
+ Ballon d'Or => award
+ UEFA Men's Player of the Year Awards => award
+ European Golden Shoes => award
+ UEFA Champions Leagues => competitions
+ UEFA European Championship => competitions
+ UEFA Nations League => competitions
+ Champions League => competitions
+ European Championship => competitions
+ ```
+
+ If you have a large number of entity types and want to pre-embed them, refer to the following code snippet:
+
+ ```python
+ labels = ["your entities"]
+ texts = ["your texts"]
+
+ entity_embeddings = model.encode_labels(labels, batch_size=8)
+
+ outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
+ ```
+
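For illustration, a slightly fuller sketch of the pre-embedding workflow above, using the same `encode_labels` and `batch_predict_with_embeds` calls; the concrete labels and texts are placeholders, and it assumes `batch_predict_with_embeds` returns one entity list per input text with the same fields as `predict_entities`:

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("knowledgator/gliner-bi-large-v1.0")

# Placeholder label set and texts, for illustration only.
labels = ["person", "award", "date", "competitions", "teams"]
texts = [
    "Cristiano Ronaldo has won five Ballon d'Or awards.",
    "Portugal won the UEFA Nations League in 2019.",
]

# Embed the label set once, then reuse the embeddings for every batch of texts.
entity_embeddings = model.encode_labels(labels, batch_size=8)
outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)

# Assumption: outputs[i] holds the entities predicted for texts[i].
for text, entities in zip(texts, outputs):
    print(text)
    for entity in entities:
        print("  ", entity["text"], "=>", entity["label"])
```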
+ ### Benchmarks
+ Below is a table with benchmark results on various named entity recognition datasets:
+
+ | Dataset | Score |
+ |---------|-------|
+ | ACE 2004 | 29.1% |
+ | ACE 2005 | 32.7% |
+ | AnatEM | 35.1% |
+ | Broad Tweet Corpus | 64.9% |
+ | CoNLL 2003 | 62.8% |
+ | FabNER | 21.8% |
+ | FindVehicle | 37.1% |
+ | GENIA_NER | 56.2% |
+ | HarveyNER | 11.7% |
+ | MultiNERD | 58.8% |
+ | Ontonotes | 24.0% |
+ | PolyglotNER | 43.2% |
+ | TweetNER7 | 35.1% |
+ | WikiANN en | 54.8% |
+ | WikiNeural | 70.4% |
+ | bc2gm | 59.9% |
+ | bc4chemd | 48.2% |
+ | bc5cdr | 69.2% |
+ | ncbi | 67.0% |
+ | **Average** | **46.4%** |
+ |||
+ | CrossNER_AI | 49.2% |
+ | CrossNER_literature | 62.1% |
+ | CrossNER_music | 70.3% |
+ | CrossNER_politics | 70.0% |
+ | CrossNER_science | 65.7% |
+ | mit-movie | 36.9% |
+ | mit-restaurant | 42.5% |
+ | **Average (zero-shot benchmark)** | **56.7%** |
+
+ ### Join Our Discord
+
+ Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "[MASK]": 128000
+ }
count.json ADDED
@@ -0,0 +1,53 @@
+ {
+ "diagnostics": 1302,
+ "date": 25179,
+ "blood type": 3101,
+ "organization": 37540,
+ "occupation": 2860,
+ "date time": 10263,
+ "last name": 6257,
+ "meddevicetechnique": 1460,
+ "datetime": 2996,
+ "license plate": 878,
+ "quantity": 18083,
+ "cvv": 1665,
+ "fooddrink": 4882,
+ "biometric identifier": 114,
+ "swift bic": 333,
+ "tax id": 3130,
+ "personalcare": 2356,
+ "location": 16087,
+ "phone number": 22377,
+ "education level": 2095,
+ "account number": 7455,
+ "drugchemical": 9503,
+ "country": 15005,
+ "street address": 15208,
+ "preventivemed": 1529,
+ "bank routing number": 1549,
+ "api key": 964,
+ "age": 3524,
+ "event": 3192,
+ "password": 958,
+ "diseasesymtom": 11770,
+ "gender": 2245,
+ "city": 28410,
+ "miscellaneous": 1766,
+ "product": 10464,
+ "organ": 2263,
+ "ipv4": 589,
+ "email": 17576,
+ "skill": 2760,
+ "user name": 819,
+ "surgery": 3939,
+ "persontype": 6482,
+ "person": 32677,
+ "treatment": 3081,
+ "unitcalibrator": 1261,
+ "transportation": 244,
+ "date of birth": 12824,
+ "postcode": 1878,
+ "device identifier": 1921,
+ "company name": 29040,
+ "url": 2108
+ }
count_synthetic.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "date": 25179,
+ "city": 28410,
+ "password": 958,
+ "blood type": 3101,
+ "device identifier": 1921,
+ "url": 1758,
+ "biometric identifier": 114,
+ "occupation": 2315,
+ "education level": 2095,
+ "cvv": 1665,
+ "date of birth": 12824,
+ "tax id": 3130,
+ "last name": 6257,
+ "swift bic": 333,
+ "phone number": 22119,
+ "gender": 2035,
+ "bank routing number": 1549,
+ "age": 3069,
+ "country": 15005,
+ "account number": 7455,
+ "license plate": 878,
+ "user name": 819,
+ "email": 17480,
+ "postcode": 1878,
+ "company name": 29040,
+ "date time": 3213,
+ "api key": 964,
+ "street address": 14562
+ }
count_test.json ADDED
@@ -0,0 +1,53 @@
+ {
+ "diagnostics": 127,
+ "date": 1312,
+ "blood type": 144,
+ "organization": 3150,
+ "occupation": 271,
+ "date time": 1480,
+ "last name": 322,
+ "meddevicetechnique": 119,
+ "datetime": 292,
+ "license plate": 31,
+ "quantity": 1513,
+ "cvv": 93,
+ "fooddrink": 284,
+ "biometric identifier": 7,
+ "swift bic": 14,
+ "tax id": 197,
+ "personalcare": 199,
+ "location": 1691,
+ "phone number": 1178,
+ "education level": 108,
+ "account number": 387,
+ "drugchemical": 707,
+ "country": 737,
+ "street address": 780,
+ "preventivemed": 139,
+ "bank routing number": 67,
+ "api key": 52,
+ "age": 290,
+ "event": 265,
+ "password": 52,
+ "diseasesymtom": 1199,
+ "gender": 174,
+ "city": 1475,
+ "miscellaneous": 236,
+ "product": 1064,
+ "organ": 492,
+ "ipv4": 47,
+ "email": 899,
+ "skill": 185,
+ "user name": 33,
+ "surgery": 221,
+ "persontype": 1034,
+ "person": 3091,
+ "treatment": 288,
+ "unitcalibrator": 243,
+ "transportation": 22,
+ "date of birth": 645,
+ "postcode": 119,
+ "device identifier": 109,
+ "company name": 1440,
+ "url": 109
+ }
count_vietmed.json ADDED
@@ -0,0 +1,20 @@
+ {
+ "diagnostics": 373,
+ "surgery": 200,
+ "diseasesymtom": 2966,
+ "treatment": 740,
+ "fooddrink": 257,
+ "unitcalibrator": 822,
+ "transportation": 5,
+ "gender": 210,
+ "personalcare": 383,
+ "location": 292,
+ "organization": 19,
+ "occupation": 545,
+ "drugchemical": 1127,
+ "meddevicetechnique": 327,
+ "datetime": 695,
+ "preventivemed": 343,
+ "organ": 1972,
+ "age": 455
+ }
count_vlsp_2021.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "quantity": 5048,
+ "persontype": 5304,
+ "person": 9762,
+ "skill": 79,
+ "organization": 9526,
+ "phone number": 258,
+ "location": 9270,
+ "miscellaneous": 1480,
+ "date time": 7050,
+ "product": 3358,
+ "street address": 646,
+ "ipv4": 66,
+ "email": 96,
+ "url": 350,
+ "event": 1362
+ }
gliner_config.json ADDED
@@ -0,0 +1,221 @@
+ {
+ "class_token_index": -1,
+ "decoder_mode": null,
+ "dropout": 0.3,
+ "embed_ent_token": true,
+ "encoder_config": {
+ "_name_or_path": "microsoft/deberta-v3-large",
+ "add_cross_attention": false,
+ "architectures": null,
+ "attention_probs_dropout_prob": 0.1,
+ "bad_words_ids": null,
+ "begin_suppress_tokens": null,
+ "bos_token_id": null,
+ "chunk_size_feed_forward": 0,
+ "cross_attention_hidden_size": null,
+ "decoder_start_token_id": null,
+ "diversity_penalty": 0.0,
+ "do_sample": false,
+ "early_stopping": false,
+ "encoder_no_repeat_ngram_size": 0,
+ "eos_token_id": null,
+ "exponential_decay_length_penalty": null,
+ "finetuning_task": null,
+ "forced_bos_token_id": null,
+ "forced_eos_token_id": null,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "id2label": {
+ "0": "LABEL_0",
+ "1": "LABEL_1"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "is_decoder": false,
+ "is_encoder_decoder": false,
+ "label2id": {
+ "LABEL_0": 0,
+ "LABEL_1": 1
+ },
+ "layer_norm_eps": 1e-07,
+ "length_penalty": 1.0,
+ "max_length": 20,
+ "max_position_embeddings": 512,
+ "max_relative_positions": -1,
+ "min_length": 0,
+ "model_type": "deberta-v2",
+ "no_repeat_ngram_size": 0,
+ "norm_rel_ebd": "layer_norm",
+ "num_attention_heads": 16,
+ "num_beam_groups": 1,
+ "num_beams": 1,
+ "num_hidden_layers": 24,
+ "num_return_sequences": 1,
+ "output_attentions": false,
+ "output_hidden_states": false,
+ "output_scores": false,
+ "pad_token_id": 0,
+ "pooler_dropout": 0,
+ "pooler_hidden_act": "gelu",
+ "pooler_hidden_size": 1024,
+ "pos_att_type": [
+ "p2c",
+ "c2p"
+ ],
+ "position_biased_input": false,
+ "position_buckets": 256,
+ "prefix": null,
+ "problem_type": null,
+ "pruned_heads": {},
+ "relative_attention": true,
+ "remove_invalid_values": false,
+ "repetition_penalty": 1.0,
+ "return_dict": true,
+ "return_dict_in_generate": false,
+ "sep_token_id": null,
+ "share_att_key": true,
+ "suppress_tokens": null,
+ "task_specific_params": null,
+ "temperature": 1.0,
+ "tf_legacy_loss": false,
+ "tie_encoder_decoder": false,
+ "tie_word_embeddings": true,
+ "tokenizer_class": null,
+ "top_k": 50,
+ "top_p": 1.0,
+ "torch_dtype": null,
+ "torchscript": false,
+ "type_vocab_size": 0,
+ "typical_p": 1.0,
+ "use_bfloat16": false,
+ "vocab_size": 128100
+ },
+ "ent_token": "<<ENT>>",
+ "eval_every": 10000,
+ "fine_tune": true,
+ "freeze_token_rep": false,
+ "full_decoder_context": true,
+ "fuse_layers": false,
+ "has_rnn": true,
+ "hidden_size": 768,
+ "label_smoothing": 0,
+ "labels_decoder": null,
+ "labels_decoder_config": null,
+ "labels_encoder": "BAAI/bge-base-en-v1.5",
+ "labels_encoder_config": {
+ "_name_or_path": "BAAI/bge-base-en-v1.5",
+ "add_cross_attention": false,
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bad_words_ids": null,
+ "begin_suppress_tokens": null,
+ "bos_token_id": null,
+ "chunk_size_feed_forward": 0,
+ "classifier_dropout": null,
+ "cross_attention_hidden_size": null,
+ "decoder_start_token_id": null,
+ "diversity_penalty": 0.0,
+ "do_sample": false,
+ "early_stopping": false,
+ "encoder_no_repeat_ngram_size": 0,
+ "eos_token_id": null,
+ "exponential_decay_length_penalty": null,
+ "finetuning_task": null,
+ "forced_bos_token_id": null,
+ "forced_eos_token_id": null,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "is_decoder": false,
+ "is_encoder_decoder": false,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "length_penalty": 1.0,
+ "max_length": 20,
+ "max_position_embeddings": 512,
+ "min_length": 0,
+ "model_type": "bert",
+ "no_repeat_ngram_size": 0,
+ "num_attention_heads": 12,
+ "num_beam_groups": 1,
+ "num_beams": 1,
+ "num_hidden_layers": 12,
+ "num_return_sequences": 1,
+ "output_attentions": false,
+ "output_hidden_states": false,
+ "output_scores": false,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "prefix": null,
+ "problem_type": null,
+ "pruned_heads": {},
+ "remove_invalid_values": false,
+ "repetition_penalty": 1.0,
+ "return_dict": true,
+ "return_dict_in_generate": false,
+ "sep_token_id": null,
+ "suppress_tokens": null,
+ "task_specific_params": null,
+ "temperature": 1.0,
+ "tf_legacy_loss": false,
+ "tie_encoder_decoder": false,
+ "tie_word_embeddings": true,
+ "tokenizer_class": null,
+ "top_k": 50,
+ "top_p": 1.0,
+ "torch_dtype": "float32",
+ "torchscript": false,
+ "type_vocab_size": 2,
+ "typical_p": 1.0,
+ "use_bfloat16": false,
+ "use_cache": true,
+ "vocab_size": 30522
+ },
+ "log_dir": "deberta/",
+ "loss_alpha": 0.8,
+ "loss_gamma": 2,
+ "loss_reduction": "sum",
+ "lr_encoder": "1e-5",
+ "lr_others": "3e-5",
+ "max_grad_norm": 10.0,
+ "max_len": 512,
+ "max_neg_type_ratio": 1,
+ "max_types": 100,
+ "max_width": 12,
+ "model_name": "microsoft/deberta-v3-large",
+ "model_type": "gliner",
+ "name": "span level gliner",
+ "num_post_fusion_layers": 1,
+ "num_steps": 100000,
+ "post_fusion_schema": "",
+ "prev_path": null,
+ "random_drop": true,
+ "root_dir": "gliner_logs",
+ "save_total_limit": 3,
+ "scheduler_type": "cosine",
+ "sep_token": "<<SEP>>",
+ "shuffle_types": true,
+ "size_sup": -1,
+ "span_mode": "markerV0",
+ "subtoken_pooling": "first",
+ "train_batch_size": 8,
+ "train_data": "data/nuner_train.json",
+ "transformers_version": "4.43.4",
+ "val_data_dir": "none",
+ "vocab_size": -1,
+ "warmup_ratio": 0.05,
+ "weight_decay_encoder": 0.1,
+ "weight_decay_other": 0.01,
+ "words_splitter_type": "whitespace"
+ }
hist_ner_type_test.png ADDED

Git LFS Details

  • SHA256: 0c359cd5ce1283d4217b485407c20ffcb5c9fb590a95a704c1f13a8a55698fd3
  • Pointer size: 131 Bytes
  • Size of remote file: 553 kB
hist_ner_type_train.png ADDED

Git LFS Details

  • SHA256: 3699220cc944a28616221cff4d1fe4ccf81f6fe2ab57da9538f81f99e4c405e3
  • Pointer size: 131 Bytes
  • Size of remote file: 609 kB
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b54d618361f4ad399da39b2eee93efdad25517134836b40f04c588d68a2b785d
+ size 2276530250
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+ "bos_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c679fbf93643d19aab7ee10c0b99e460bdbc02fedf34b92b05af343b4af586fd
+ size 2464616
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "128000": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "[CLS]",
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_lower_case": false,
+ "eos_token": "[SEP]",
+ "mask_token": "[MASK]",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "sp_model_kwargs": {},
+ "split_by_punct": false,
+ "tokenizer_class": "DebertaV2Tokenizer",
+ "unk_token": "[UNK]",
+ "vocab_type": "spm"
+ }