MaziyarPanahi committed on
Commit
9188755
·
verified ·
1 Parent(s): 5b6093c

Upload French PII detection model OpenMed-PII-French-ClinicalBGE-Large-568M-v1

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,305 @@
1
+ ---
2
+ language:
3
+ - fr
4
+ license: apache-2.0
5
+ base_model: BAAI/bge-m3
6
+ tags:
7
+ - token-classification
8
+ - ner
9
+ - pii
10
+ - pii-detection
11
+ - de-identification
12
+ - privacy
13
+ - healthcare
14
+ - medical
15
+ - clinical
16
+ - phi
17
+ - french
18
+ - pytorch
19
+ - transformers
20
+ - openmed
21
+ pipeline_tag: token-classification
22
+ library_name: transformers
23
+ metrics:
24
+ - f1
25
+ - precision
26
+ - recall
27
+ model-index:
28
+ - name: OpenMed-PII-French-ClinicalBGE-568M-v1
29
+ results:
30
+ - task:
31
+ type: token-classification
32
+ name: Named Entity Recognition
33
+ dataset:
34
+ name: AI4Privacy (French subset)
35
+ type: ai4privacy/pii-masking-400k
36
+ split: test
37
+ metrics:
38
+ - type: f1
39
+ value: 0.9733
40
+ name: F1 (micro)
41
+ - type: precision
42
+ value: 0.9718
43
+ name: Precision
44
+ - type: recall
45
+ value: 0.9748
46
+ name: Recall
47
+ widget:
48
+ - text: "Dr. Jean Dupont (NSS: 1 85 12 75 108 123 45) peut être contacté à jean.dupont@hopital.fr ou au 06 12 34 56 78. Il habite au 15 Rue de la Paix, 75002 Paris."
49
+ example_title: Clinical Note with PII (French)
50
+ ---
51
+
52
+ # OpenMed-PII-French-ClinicalBGE-568M-v1
53
+
54
+ **French PII Detection Model** | 568M Parameters | Open Source
55
+
56
+ [![F1 Score](https://img.shields.io/badge/F1-97.33%25-brightgreen)]() [![Precision](https://img.shields.io/badge/Precision-97.18%25-blue)]() [![Recall](https://img.shields.io/badge/Recall-97.48%25-orange)]()
57
+
58
+ ## Model Description
59
+
60
+ **OpenMed-PII-French-ClinicalBGE-568M-v1** is a transformer-based token classification model fine-tuned for **Personally Identifiable Information (PII) detection in French text**. The model identifies and classifies **54 types of sensitive information**, including names, addresses, social security numbers, bank and credit card details, and more.
61
+
62
+ ### Key Features
63
+
64
+ - **French-Optimized**: Specifically trained on French text for optimal performance
65
+ - **High Accuracy**: 97.33% micro-F1 on the AI4Privacy French test split (see Performance below)
66
+ - **Comprehensive Coverage**: Detects 54 entity types spanning personal, financial, location, contact, and identifier information
67
+ - **Privacy-Focused**: Designed for de-identification and compliance with GDPR and other privacy regulations
68
+ - **Production-Ready**: Optimized for real-world text processing pipelines
69
+
70
+ ## Performance
71
+
72
+ Evaluated on the test split of the French subset of the AI4Privacy dataset:
73
+
74
+ | Metric | Score |
75
+ |:---|:---:|
76
+ | **Micro F1** | **0.9733** |
77
+ | Precision | 0.9718 |
78
+ | Recall | 0.9748 |
79
+ | Macro F1 | 0.9667 |
80
+ | Weighted F1 | 0.9730 |
81
+ | Accuracy | 0.9963 |
82
+
83
+ ### Top 10 French PII Models
84
+
85
+ | Rank | Model | F1 | Precision | Recall |
86
+ |:---:|:---|:---:|:---:|:---:|
87
+ | 1 | [OpenMed-PII-French-SuperClinical-Large-434M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-SuperClinical-Large-434M-v1) | 0.9797 | 0.9790 | 0.9804 |
88
+ | 2 | [OpenMed-PII-French-EuroMed-210M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-EuroMed-210M-v1) | 0.9762 | 0.9747 | 0.9777 |
89
+ | **3** | **[OpenMed-PII-French-ClinicalBGE-568M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-ClinicalBGE-568M-v1)** | **0.9733** | **0.9718** | **0.9748** |
90
+ | 4 | [OpenMed-PII-French-BigMed-Large-560M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-BigMed-Large-560M-v1) | 0.9733 | 0.9716 | 0.9749 |
91
+ | 5 | [OpenMed-PII-French-SnowflakeMed-Large-568M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-SnowflakeMed-Large-568M-v1) | 0.9728 | 0.9711 | 0.9745 |
92
+ | 6 | [OpenMed-PII-French-SuperMedical-Large-355M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-SuperMedical-Large-355M-v1) | 0.9728 | 0.9712 | 0.9744 |
93
+ | 7 | [OpenMed-PII-French-NomicMed-Large-395M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1) | 0.9722 | 0.9704 | 0.9740 |
94
+ | 8 | [OpenMed-PII-French-mClinicalE5-Large-560M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-mClinicalE5-Large-560M-v1) | 0.9713 | 0.9697 | 0.9729 |
95
+ | 9 | [OpenMed-PII-French-mSuperClinical-Base-279M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-mSuperClinical-Base-279M-v1) | 0.9674 | 0.9662 | 0.9687 |
96
+ | 10 | [OpenMed-PII-French-ClinicalBGE-Large-335M-v1](https://huggingface.co/OpenMed/OpenMed-PII-French-ClinicalBGE-Large-335M-v1) | 0.9668 | 0.9644 | 0.9692 |
97
+
98
+ ## Supported Entity Types
99
+
100
+ This model detects **54 PII entity types** organized into categories:
101
+
102
+ <details>
103
+ <summary><strong>Identifiers</strong> (22 types)</summary>
104
+
105
+ | Entity | Description |
106
+ |:---|:---|
107
+ | `ACCOUNTNAME` | Account name |
108
+ | `BANKACCOUNT` | Bank account number |
109
+ | `BIC` | BIC (bank identifier code) |
110
+ | `BITCOINADDRESS` | Bitcoin address |
111
+ | `CREDITCARD` | Credit card number |
112
+ | `CREDITCARDISSUER` | Credit card issuer |
113
+ | `CVV` | CVV code |
114
+ | `ETHEREUMADDRESS` | Ethereum address |
115
+ | `IBAN` | IBAN |
116
+ | `IMEI` | IMEI number |
117
+ | ... | *and 12 more* |
118
+
119
+ </details>
120
+
121
+ <details>
122
+ <summary><strong>Personal Info</strong> (11 types)</summary>
123
+
124
+ | Entity | Description |
125
+ |:---|:---|
126
+ | `AGE` | Age |
127
+ | `DATEOFBIRTH` | Date of birth |
128
+ | `EYECOLOR` | Eye color |
129
+ | `FIRSTNAME` | First name |
130
+ | `GENDER` | Gender |
131
+ | `HEIGHT` | Height |
132
+ | `LASTNAME` | Last name |
133
+ | `MIDDLENAME` | Middle name |
134
+ | `OCCUPATION` | Occupation |
135
+ | `PREFIX` | Name prefix (e.g., M., Dr.) |
136
+ | ... | *and 1 more* |
137
+
138
+ </details>
139
+
140
+ <details>
141
+ <summary><strong>Contact Info</strong> (2 types)</summary>
142
+
143
+ | Entity | Description |
144
+ |:---|:---|
145
+ | `EMAIL` | Email address |
146
+ | `PHONE` | Phone number |
147
+
148
+ </details>
149
+
150
+ <details>
151
+ <summary><strong>Location</strong> (9 types)</summary>
152
+
153
+ | Entity | Description |
154
+ |:---|:---|
155
+ | `BUILDINGNUMBER` | Building number |
156
+ | `CITY` | City |
157
+ | `COUNTY` | County |
158
+ | `GPSCOORDINATES` | GPS coordinates |
159
+ | `ORDINALDIRECTION` | Ordinal direction (e.g., north, south-east) |
160
+ | `SECONDARYADDRESS` | Secondary address (apartment, suite, etc.) |
161
+ | `STATE` | State |
162
+ | `STREET` | Street name |
163
+ | `ZIPCODE` | Postal / ZIP code |
164
+
165
+ </details>
166
+
167
+ <details>
168
+ <summary><strong>Organization</strong> (3 types)</summary>
169
+
170
+ | Entity | Description |
171
+ |:---|:---|
172
+ | `JOBDEPARTMENT` | Job department |
173
+ | `JOBTITLE` | Job title |
174
+ | `ORGANIZATION` | Organization |
175
+
176
+ </details>
177
+
178
+ <details>
179
+ <summary><strong>Financial</strong> (5 types)</summary>
180
+
181
+ | Entity | Description |
182
+ |:---|:---|
183
+ | `AMOUNT` | Amount |
184
+ | `CURRENCY` | Currency |
185
+ | `CURRENCYCODE` | Currency code |
186
+ | `CURRENCYNAME` | Currency name |
187
+ | `CURRENCYSYMBOL` | Currency symbol |
188
+
189
+ </details>
190
+
191
+ <details>
192
+ <summary><strong>Temporal</strong> (2 types)</summary>
193
+
194
+ | Entity | Description |
195
+ |:---|:---|
196
+ | `DATE` | Date |
197
+ | `TIME` | Time |
198
+
199
+ </details>
200
+
201
+ ## Usage
202
+
203
+ ### Quick Start
204
+
205
+ ```python
206
+ from transformers import pipeline
207
+
208
+ # Load the PII detection pipeline
209
+ ner = pipeline("ner", model="OpenMed/OpenMed-PII-French-ClinicalBGE-568M-v1", aggregation_strategy="simple")
210
+
211
+ text = """
212
+ Patient Jean Martin (né le 15/03/1985, NSS: 1 85 03 75 108 234 67) a été vu aujourd'hui.
213
+ Contact: jean.martin@email.fr, Téléphone: 06 12 34 56 78.
214
+ Adresse: 123 Avenue des Champs-Élysées, 75008 Paris.
215
+ """
216
+
217
+ entities = ner(text)
218
+ for entity in entities:
219
+ print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
220
+ ```
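+
+ In practice you often keep only high-confidence detections, or only certain entity types. A minimal filtering sketch; the 0.80 threshold and the chosen entity types are illustrative assumptions, not tuned values:
+
+ ```python
+ # Keep only confident predictions for selected entity types (illustrative values)
+ MIN_SCORE = 0.80
+ KEEP_TYPES = {"FIRSTNAME", "LASTNAME", "EMAIL", "PHONE", "SSN"}
+
+ filtered = [
+     ent for ent in entities
+     if ent["score"] >= MIN_SCORE and ent["entity_group"] in KEEP_TYPES
+ ]
+ for ent in filtered:
+     print(f"{ent['entity_group']}: {ent['word']} ({ent['score']:.3f})")
+ ```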
221
+
222
+ ### De-identification Example
223
+
224
+ ```python
225
+ def redact_pii(text, entities):
226
+     """Replace each detected PII span with a bracketed entity-type tag, e.g. [FIRSTNAME]."""
227
+     # Sort entities by start position (descending) so earlier offsets stay valid
228
+     sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
229
+     redacted = text
230
+     for ent in sorted_entities:
231
+         redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
232
+     return redacted
233
+
234
+ # Apply de-identification
235
+ redacted_text = redact_pii(text, entities)
236
+ print(redacted_text)
237
+ ```
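+
+ If repeated mentions should map to consistent pseudonyms rather than bare category tags, a hedged variant of the same idea is sketched below (the numbering scheme is an illustrative choice, not part of the model; placeholders are assigned in reverse document order):
+
+ ```python
+ from collections import defaultdict
+
+ def pseudonymize_pii(text, entities):
+     """Replace each distinct PII surface form with a stable numbered placeholder."""
+     counters = defaultdict(int)   # per-category counters -> FIRSTNAME_1, FIRSTNAME_2, ...
+     mapping = {}                  # (category, surface form) -> placeholder
+     redacted = text
+     # Walk right-to-left so earlier character offsets remain valid after replacement
+     for ent in sorted(entities, key=lambda x: x['start'], reverse=True):
+         key = (ent['entity_group'], text[ent['start']:ent['end']])
+         if key not in mapping:
+             counters[ent['entity_group']] += 1
+             mapping[key] = f"[{ent['entity_group']}_{counters[ent['entity_group']]}]"
+         redacted = redacted[:ent['start']] + mapping[key] + redacted[ent['end']:]
+     return redacted
+
+ print(pseudonymize_pii(text, entities))
+ ```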
238
+
239
+ ### Batch Processing
240
+
241
+ ```python
242
+ from transformers import AutoModelForTokenClassification, AutoTokenizer
243
+ import torch
244
+
245
+ model_name = "OpenMed/OpenMed-PII-French-ClinicalBGE-568M-v1"
246
+ model = AutoModelForTokenClassification.from_pretrained(model_name)
247
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
248
+
249
+ texts = [
250
+ "Patient Jean Martin (né le 15/03/1985, NSS: 1 85 03 75 108 234 67) a été vu aujourd'hui.",
251
+ "Contact: jean.martin@email.fr, Téléphone: 06 12 34 56 78.",
252
+ ]
253
+
254
+ inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
255
+ with torch.no_grad():
256
+ outputs = model(**inputs)
257
+ predictions = torch.argmax(outputs.logits, dim=-1)
258
+ ```
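+
+ The logits above yield per-token label ids. A minimal sketch of turning them into readable BIO tags via `model.config.id2label` (for most applications, the `pipeline` call with `aggregation_strategy="simple"` shown earlier already merges subwords for you):
+
+ ```python
+ # Map predicted label ids back to tag names from the model config
+ id2label = model.config.id2label
+ for i in range(len(texts)):
+     tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][i].tolist())
+     tags = [id2label[int(p)] for p in predictions[i]]
+     for token, tag in zip(tokens, tags):
+         # Skip special/padding tokens and non-entity tokens
+         if token not in tokenizer.all_special_tokens and tag != "O":
+             print(f"{token}\t{tag}")
+ ```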
259
+
260
+ ## Training Details
261
+
262
+ ### Dataset
263
+
264
+ - **Source**: [AI4Privacy PII Masking 400k](https://huggingface.co/datasets/ai4privacy/pii-masking-400k) (French subset)
265
+ - **Format**: BIO-tagged token classification (see the illustrative example after this list)
266
+ - **Labels**: 76 tags in this model's config (54 `B-` tags, 21 `I-` tags that occur in the data, plus `O`)
267
+
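+
+ For reference, a hypothetical BIO-tagged sample in this format (the sentence is invented for illustration; only the tag names come from the model's label set):
+
+ ```python
+ # Word-level tokens and their BIO tags (illustrative, not taken from the corpus)
+ tokens = ["Dr.",      "Jean",        "Dupont",     "habite", "à", "Paris",  "."]
+ tags   = ["B-PREFIX", "B-FIRSTNAME", "B-LASTNAME", "O",      "O", "B-CITY", "O"]
+ ```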
268
+ ### Training Configuration
269
+
270
+ - **Max Sequence Length**: 512 tokens
271
+ - **Epochs**: 3
272
+ - **Framework**: Hugging Face Transformers + Trainer API (a hedged fine-tuning sketch follows below)
273
+
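+
+ A hedged sketch of this recipe with the Hugging Face `Trainer` is shown below. The tiny in-memory dataset, the reduced label list, and the learning rate / batch size are illustrative assumptions only; they are not the data or hyperparameters used to train this model:
+
+ ```python
+ from datasets import Dataset
+ from transformers import (AutoModelForTokenClassification, AutoTokenizer,
+                           DataCollatorForTokenClassification, Trainer, TrainingArguments)
+
+ base_model = "BAAI/bge-m3"
+ labels = ["O", "B-FIRSTNAME", "B-LASTNAME", "B-CITY"]   # toy subset of the full tag set
+ id2label = dict(enumerate(labels))
+ label2id = {l: i for i, l in enumerate(labels)}
+
+ tokenizer = AutoTokenizer.from_pretrained(base_model)
+ model = AutoModelForTokenClassification.from_pretrained(
+     base_model, num_labels=len(labels), id2label=id2label, label2id=label2id)
+
+ # Two toy BIO-tagged sentences standing in for the French AI4Privacy subset
+ raw = Dataset.from_dict({
+     "tokens": [["Jean", "Dupont", "habite", "à", "Paris"],
+                ["Contactez", "Marie", "Martin"]],
+     "ner_tags": [[1, 2, 0, 0, 3], [0, 1, 2]],
+ })
+
+ def tokenize_and_align(batch):
+     enc = tokenizer(batch["tokens"], is_split_into_words=True,
+                     truncation=True, max_length=512)
+     aligned = []
+     for i, word_labels in enumerate(batch["ner_tags"]):
+         word_ids = enc.word_ids(batch_index=i)
+         # Label only the first subword of each word; special tokens and continuations get -100
+         aligned.append([word_labels[w] if w is not None and (j == 0 or word_ids[j - 1] != w) else -100
+                         for j, w in enumerate(word_ids)])
+     enc["labels"] = aligned
+     return enc
+
+ tokenized = raw.map(tokenize_and_align, batched=True, remove_columns=raw.column_names)
+
+ args = TrainingArguments(output_dir="pii-fr-demo", num_train_epochs=3,
+                          per_device_train_batch_size=2, learning_rate=2e-5)
+ trainer = Trainer(model=model, args=args, train_dataset=tokenized,
+                   data_collator=DataCollatorForTokenClassification(tokenizer))
+ trainer.train()
+ ```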
274
+ ## Intended Use & Limitations
275
+
276
+ ### Intended Use
277
+
278
+ - **De-identification**: Automated redaction of PII in French clinical notes, medical records, and documents
279
+ - **Compliance**: Supporting compliance with GDPR and other privacy regulations
280
+ - **Data Preprocessing**: Preparing datasets for research by removing sensitive information
281
+ - **Audit Support**: Identifying PII in document collections
282
+
283
+ ### Limitations
284
+
285
+ **Important**: This model is intended as an **assistive tool**, not a replacement for human review.
286
+
287
+ - **False Negatives**: Some PII may not be detected; always verify outputs in critical applications
288
+ - **Context Sensitivity**: Performance may vary with domain-specific terminology
289
+ - **Language**: Optimized for French text; may not perform well on other languages
290
+
291
+ ## Citation
292
+
293
+ ```bibtex
294
+ @misc{openmed-pii-2026,
295
+ title = {OpenMed-PII-French-ClinicalBGE-568M-v1: French PII Detection Model},
296
+ author = {OpenMed Science},
297
+ year = {2026},
298
+ publisher = {Hugging Face},
299
+ url = {https://huggingface.co/OpenMed/OpenMed-PII-French-ClinicalBGE-568M-v1}
300
+ }
301
+ ```
302
+
303
+ ## Links
304
+
305
+ - **Organization**: [OpenMed](https://huggingface.co/OpenMed)
all_results.json ADDED
@@ -0,0 +1,28 @@
1
+ {
2
+ "epoch": 3.0,
3
+ "eval_accuracy": 0.9961240086517664,
4
+ "eval_f1": 0.9726980003788187,
5
+ "eval_loss": 0.01059788279235363,
6
+ "eval_macro_f1": 0.9661009444248277,
7
+ "eval_precision": 0.971042679632631,
8
+ "eval_recall": 0.9743589743589743,
9
+ "eval_runtime": 6.0584,
10
+ "eval_samples_per_second": 1024.523,
11
+ "eval_steps_per_second": 16.011,
12
+ "eval_weighted_f1": 0.972552632037373,
13
+ "test_accuracy": 0.9962841708002117,
14
+ "test_f1": 0.9733174894674301,
15
+ "test_loss": 0.010280761867761612,
16
+ "test_macro_f1": 0.9666768552091859,
17
+ "test_precision": 0.9717952866310737,
18
+ "test_recall": 0.9748444684879632,
19
+ "test_runtime": 5.4192,
20
+ "test_samples_per_second": 1138.721,
21
+ "test_steps_per_second": 17.899,
22
+ "test_weighted_f1": 0.972977336746876,
23
+ "total_flos": 1.647057060233216e+16,
24
+ "train_loss": 0.1105563929324509,
25
+ "train_runtime": 942.4834,
26
+ "train_samples_per_second": 157.817,
27
+ "train_steps_per_second": 2.467
28
+ }
classification_report.txt ADDED
@@ -0,0 +1,64 @@
1
+ Classification Report for French PII Detection
2
+ Model: BAAI/bge-m3
3
+ ============================================================
4
+
5
+ precision recall f1-score support
6
+
7
+ ACCOUNTNAME 1.00 1.00 1.00 360
8
+ AGE 0.97 0.99 0.98 389
9
+ AMOUNT 1.00 1.00 1.00 104
10
+ BANKACCOUNT 1.00 1.00 1.00 312
11
+ BIC 1.00 1.00 1.00 98
12
+ BITCOINADDRESS 0.97 0.99 0.98 318
13
+ BUILDINGNUMBER 0.97 0.95 0.96 396
14
+ CITY 0.98 0.98 0.98 329
15
+ COUNTY 0.99 1.00 1.00 382
16
+ CREDITCARD 0.90 0.96 0.93 382
17
+ CREDITCARDISSUER 1.00 1.00 1.00 210
18
+ CURRENCY 0.69 0.84 0.76 210
19
+ CURRENCYCODE 0.99 0.98 0.99 106
20
+ CURRENCYNAME 0.45 0.36 0.40 109
21
+ CURRENCYSYMBOL 0.99 0.99 0.99 355
22
+ CVV 0.99 1.00 0.99 83
23
+ DATE 0.79 0.93 0.85 598
24
+ DATEOFBIRTH 0.84 0.65 0.74 404
25
+ EMAIL 1.00 1.00 1.00 495
26
+ ETHEREUMADDRESS 1.00 1.00 1.00 236
27
+ EYECOLOR 1.00 1.00 1.00 162
28
+ FIRSTNAME 0.99 0.98 0.99 1927
29
+ GENDER 1.00 1.00 1.00 412
30
+ GPSCOORDINATES 1.00 1.00 1.00 300
31
+ HEIGHT 0.99 1.00 1.00 155
32
+ IBAN 1.00 1.00 1.00 273
33
+ IMEI 1.00 1.00 1.00 304
34
+ IPADDRESS 1.00 1.00 1.00 992
35
+ JOBDEPARTMENT 0.98 1.00 0.99 336
36
+ JOBTITLE 1.00 1.00 1.00 329
37
+ LASTNAME 0.98 0.98 0.98 585
38
+ LITECOINADDRESS 0.96 0.90 0.93 110
39
+ MACADDRESS 1.00 1.00 1.00 145
40
+ MASKEDNUMBER 0.95 0.87 0.91 302
41
+ MIDDLENAME 0.93 0.99 0.96 374
42
+ OCCUPATION 1.00 0.99 1.00 382
43
+ ORDINALDIRECTION 1.00 1.00 1.00 185
44
+ ORGANIZATION 1.00 1.00 1.00 322
45
+ PASSWORD 1.00 1.00 1.00 393
46
+ PHONE 1.00 1.00 1.00 341
47
+ PIN 1.00 1.00 1.00 83
48
+ PREFIX 0.99 0.99 0.99 391
49
+ SECONDARYADDRESS 1.00 1.00 1.00 357
50
+ SEX 1.00 1.00 1.00 422
51
+ SSN 1.00 1.00 1.00 331
52
+ STATE 0.99 1.00 1.00 348
53
+ STREET 0.99 0.99 0.99 409
54
+ TIME 1.00 0.99 0.99 348
55
+ URL 1.00 1.00 1.00 364
56
+ USERAGENT 0.99 1.00 1.00 295
57
+ USERNAME 1.00 1.00 1.00 348
58
+ VIN 1.00 0.99 1.00 113
59
+ VRM 0.98 1.00 0.99 125
60
+ ZIPCODE 0.95 0.97 0.96 346
61
+
62
+ micro avg 0.97 0.97 0.97 18485
63
+ macro avg 0.97 0.97 0.97 18485
64
+ weighted avg 0.97 0.97 0.97 18485
config.json ADDED
@@ -0,0 +1,183 @@
1
+ {
2
+ "architectures": [
3
+ "XLMRobertaForTokenClassification"
4
+ ],
5
+ "attention_probs_dropout_prob": 0.1,
6
+ "bos_token_id": 0,
7
+ "classifier_dropout": null,
8
+ "dtype": "float32",
9
+ "eos_token_id": 2,
10
+ "hidden_act": "gelu",
11
+ "hidden_dropout_prob": 0.1,
12
+ "hidden_size": 1024,
13
+ "id2label": {
14
+ "0": "O",
15
+ "1": "B-ACCOUNTNAME",
16
+ "2": "B-AGE",
17
+ "3": "B-AMOUNT",
18
+ "4": "B-BANKACCOUNT",
19
+ "5": "B-BIC",
20
+ "6": "B-BITCOINADDRESS",
21
+ "7": "B-BUILDINGNUMBER",
22
+ "8": "B-CITY",
23
+ "9": "B-COUNTY",
24
+ "10": "B-CREDITCARD",
25
+ "11": "B-CREDITCARDISSUER",
26
+ "12": "B-CURRENCY",
27
+ "13": "B-CURRENCYCODE",
28
+ "14": "B-CURRENCYNAME",
29
+ "15": "B-CURRENCYSYMBOL",
30
+ "16": "B-CVV",
31
+ "17": "B-DATE",
32
+ "18": "B-DATEOFBIRTH",
33
+ "19": "B-EMAIL",
34
+ "20": "B-ETHEREUMADDRESS",
35
+ "21": "B-EYECOLOR",
36
+ "22": "B-FIRSTNAME",
37
+ "23": "B-GENDER",
38
+ "24": "B-GPSCOORDINATES",
39
+ "25": "B-HEIGHT",
40
+ "26": "B-IBAN",
41
+ "27": "B-IMEI",
42
+ "28": "B-IPADDRESS",
43
+ "29": "B-JOBDEPARTMENT",
44
+ "30": "B-JOBTITLE",
45
+ "31": "B-LASTNAME",
46
+ "32": "B-LITECOINADDRESS",
47
+ "33": "B-MACADDRESS",
48
+ "34": "B-MASKEDNUMBER",
49
+ "35": "B-MIDDLENAME",
50
+ "36": "B-OCCUPATION",
51
+ "37": "B-ORDINALDIRECTION",
52
+ "38": "B-ORGANIZATION",
53
+ "39": "B-PASSWORD",
54
+ "40": "B-PHONE",
55
+ "41": "B-PIN",
56
+ "42": "B-PREFIX",
57
+ "43": "B-SECONDARYADDRESS",
58
+ "44": "B-SEX",
59
+ "45": "B-SSN",
60
+ "46": "B-STATE",
61
+ "47": "B-STREET",
62
+ "48": "B-TIME",
63
+ "49": "B-URL",
64
+ "50": "B-USERAGENT",
65
+ "51": "B-USERNAME",
66
+ "52": "B-VIN",
67
+ "53": "B-VRM",
68
+ "54": "B-ZIPCODE",
69
+ "55": "I-ACCOUNTNAME",
70
+ "56": "I-AGE",
71
+ "57": "I-AMOUNT",
72
+ "58": "I-CITY",
73
+ "59": "I-COUNTY",
74
+ "60": "I-CURRENCY",
75
+ "61": "I-CURRENCYNAME",
76
+ "62": "I-DATE",
77
+ "63": "I-DATEOFBIRTH",
78
+ "64": "I-EYECOLOR",
79
+ "65": "I-GENDER",
80
+ "66": "I-HEIGHT",
81
+ "67": "I-JOBTITLE",
82
+ "68": "I-ORGANIZATION",
83
+ "69": "I-PHONE",
84
+ "70": "I-SECONDARYADDRESS",
85
+ "71": "I-SSN",
86
+ "72": "I-STATE",
87
+ "73": "I-STREET",
88
+ "74": "I-TIME",
89
+ "75": "I-USERAGENT"
90
+ },
91
+ "initializer_range": 0.02,
92
+ "intermediate_size": 4096,
93
+ "label2id": {
94
+ "B-ACCOUNTNAME": 1,
95
+ "B-AGE": 2,
96
+ "B-AMOUNT": 3,
97
+ "B-BANKACCOUNT": 4,
98
+ "B-BIC": 5,
99
+ "B-BITCOINADDRESS": 6,
100
+ "B-BUILDINGNUMBER": 7,
101
+ "B-CITY": 8,
102
+ "B-COUNTY": 9,
103
+ "B-CREDITCARD": 10,
104
+ "B-CREDITCARDISSUER": 11,
105
+ "B-CURRENCY": 12,
106
+ "B-CURRENCYCODE": 13,
107
+ "B-CURRENCYNAME": 14,
108
+ "B-CURRENCYSYMBOL": 15,
109
+ "B-CVV": 16,
110
+ "B-DATE": 17,
111
+ "B-DATEOFBIRTH": 18,
112
+ "B-EMAIL": 19,
113
+ "B-ETHEREUMADDRESS": 20,
114
+ "B-EYECOLOR": 21,
115
+ "B-FIRSTNAME": 22,
116
+ "B-GENDER": 23,
117
+ "B-GPSCOORDINATES": 24,
118
+ "B-HEIGHT": 25,
119
+ "B-IBAN": 26,
120
+ "B-IMEI": 27,
121
+ "B-IPADDRESS": 28,
122
+ "B-JOBDEPARTMENT": 29,
123
+ "B-JOBTITLE": 30,
124
+ "B-LASTNAME": 31,
125
+ "B-LITECOINADDRESS": 32,
126
+ "B-MACADDRESS": 33,
127
+ "B-MASKEDNUMBER": 34,
128
+ "B-MIDDLENAME": 35,
129
+ "B-OCCUPATION": 36,
130
+ "B-ORDINALDIRECTION": 37,
131
+ "B-ORGANIZATION": 38,
132
+ "B-PASSWORD": 39,
133
+ "B-PHONE": 40,
134
+ "B-PIN": 41,
135
+ "B-PREFIX": 42,
136
+ "B-SECONDARYADDRESS": 43,
137
+ "B-SEX": 44,
138
+ "B-SSN": 45,
139
+ "B-STATE": 46,
140
+ "B-STREET": 47,
141
+ "B-TIME": 48,
142
+ "B-URL": 49,
143
+ "B-USERAGENT": 50,
144
+ "B-USERNAME": 51,
145
+ "B-VIN": 52,
146
+ "B-VRM": 53,
147
+ "B-ZIPCODE": 54,
148
+ "I-ACCOUNTNAME": 55,
149
+ "I-AGE": 56,
150
+ "I-AMOUNT": 57,
151
+ "I-CITY": 58,
152
+ "I-COUNTY": 59,
153
+ "I-CURRENCY": 60,
154
+ "I-CURRENCYNAME": 61,
155
+ "I-DATE": 62,
156
+ "I-DATEOFBIRTH": 63,
157
+ "I-EYECOLOR": 64,
158
+ "I-GENDER": 65,
159
+ "I-HEIGHT": 66,
160
+ "I-JOBTITLE": 67,
161
+ "I-ORGANIZATION": 68,
162
+ "I-PHONE": 69,
163
+ "I-SECONDARYADDRESS": 70,
164
+ "I-SSN": 71,
165
+ "I-STATE": 72,
166
+ "I-STREET": 73,
167
+ "I-TIME": 74,
168
+ "I-USERAGENT": 75,
169
+ "O": 0
170
+ },
171
+ "layer_norm_eps": 1e-05,
172
+ "max_position_embeddings": 8194,
173
+ "model_type": "xlm-roberta",
174
+ "num_attention_heads": 16,
175
+ "num_hidden_layers": 24,
176
+ "output_past": true,
177
+ "pad_token_id": 1,
178
+ "position_embedding_type": "absolute",
179
+ "transformers_version": "4.57.3",
180
+ "type_vocab_size": 1,
181
+ "use_cache": true,
182
+ "vocab_size": 250002
183
+ }
eval_results.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "epoch": 3.0,
3
+ "eval_accuracy": 0.9961240086517664,
4
+ "eval_f1": 0.9726980003788187,
5
+ "eval_loss": 0.01059788279235363,
6
+ "eval_macro_f1": 0.9661009444248277,
7
+ "eval_precision": 0.971042679632631,
8
+ "eval_recall": 0.9743589743589743,
9
+ "eval_runtime": 6.0584,
10
+ "eval_samples_per_second": 1024.523,
11
+ "eval_steps_per_second": 16.011,
12
+ "eval_weighted_f1": 0.972552632037373
13
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74f9ff66a5420fb7c5bf4dc0534e68cc4b7a9840fa1a8a13c439177bd9feb567
3
+ size 2267180744
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
3
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "cls_token": {
10
+ "content": "<s>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "eos_token": {
17
+ "content": "</s>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "mask_token": {
24
+ "content": "<mask>",
25
+ "lstrip": true,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ },
30
+ "pad_token": {
31
+ "content": "<pad>",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false
36
+ },
37
+ "sep_token": {
38
+ "content": "</s>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false
43
+ },
44
+ "unk_token": {
45
+ "content": "<unk>",
46
+ "lstrip": false,
47
+ "normalized": false,
48
+ "rstrip": false,
49
+ "single_word": false
50
+ }
51
+ }
test_results.json ADDED
@@ -0,0 +1,12 @@
1
+ {
2
+ "test_accuracy": 0.9962841708002117,
3
+ "test_f1": 0.9733174894674301,
4
+ "test_loss": 0.010280761867761612,
5
+ "test_macro_f1": 0.9666768552091859,
6
+ "test_precision": 0.9717952866310737,
7
+ "test_recall": 0.9748444684879632,
8
+ "test_runtime": 5.4192,
9
+ "test_samples_per_second": 1138.721,
10
+ "test_steps_per_second": 17.899,
11
+ "test_weighted_f1": 0.972977336746876
12
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:33cd99e33ce09bdd8a6136fddfe90a1c47f85bafedf7309d0eecc19012d43586
3
+ size 17082897
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "0": {
4
+ "content": "<s>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "1": {
12
+ "content": "<pad>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "2": {
20
+ "content": "</s>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "3": {
28
+ "content": "<unk>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "250001": {
36
+ "content": "<mask>",
37
+ "lstrip": true,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ }
43
+ },
44
+ "bos_token": "<s>",
45
+ "clean_up_tokenization_spaces": true,
46
+ "cls_token": "<s>",
47
+ "eos_token": "</s>",
48
+ "extra_special_tokens": {},
49
+ "mask_token": "<mask>",
50
+ "model_max_length": 8192,
51
+ "pad_token": "<pad>",
52
+ "sep_token": "</s>",
53
+ "sp_model_kwargs": {},
54
+ "tokenizer_class": "XLMRobertaTokenizer",
55
+ "unk_token": "<unk>"
56
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "epoch": 3.0,
3
+ "total_flos": 1.647057060233216e+16,
4
+ "train_loss": 0.1105563929324509,
5
+ "train_runtime": 942.4834,
6
+ "train_samples_per_second": 157.817,
7
+ "train_steps_per_second": 2.467
8
+ }