MaziyarPanahi committed on
Commit 2c664f6 · verified · 1 Parent(s): e5f4f5e

Upload Italian PII detection model OpenMed-PII-Italian-ClinicalLongformer-Base-149M-v1

README.md ADDED
@@ -0,0 +1,305 @@
1
+ ---
2
+ language:
3
+ - it
4
+ license: apache-2.0
5
+ base_model: yikuan8/Clinical-Longformer
6
+ tags:
7
+ - token-classification
8
+ - ner
9
+ - pii
10
+ - pii-detection
11
+ - de-identification
12
+ - privacy
13
+ - healthcare
14
+ - medical
15
+ - clinical
16
+ - phi
17
+ - italian
18
+ - pytorch
19
+ - transformers
20
+ - openmed
21
+ pipeline_tag: token-classification
22
+ library_name: transformers
23
+ metrics:
24
+ - f1
25
+ - precision
26
+ - recall
27
+ model-index:
28
+ - name: OpenMed-PII-Italian-ClinicalLongformer-149M-v1
29
+ results:
30
+ - task:
31
+ type: token-classification
32
+ name: Named Entity Recognition
33
+ dataset:
34
+ name: AI4Privacy (Italian subset)
35
+ type: ai4privacy/pii-masking-400k
36
+ split: test
37
+ metrics:
38
+ - type: f1
39
+ value: 0.9514
40
+ name: F1 (micro)
41
+ - type: precision
42
+ value: 0.9489
43
+ name: Precision
44
+ - type: recall
45
+ value: 0.9540
46
+ name: Recall
47
+ widget:
48
+ - text: "Dr. Marco Rossi (Codice Fiscale: RSSMRC85C15H501Z) può essere contattato a marco.rossi@ospedale.it o al +39 333 123 4567. Abita in Via Roma 25, 00184 Roma."
49
+ example_title: Clinical Note with PII (Italian)
50
+ ---
51
+
52
+ # OpenMed-PII-Italian-ClinicalLongformer-149M-v1
53
+
54
+ **Italian PII Detection Model** | 149M Parameters | Open Source
55
+
56
+ [![F1 Score](https://img.shields.io/badge/F1-95.14%25-brightgreen)]() [![Precision](https://img.shields.io/badge/Precision-94.89%25-blue)]() [![Recall](https://img.shields.io/badge/Recall-95.40%25-orange)]()
57
+
58
+ ## Model Description
59
+
60
+ **OpenMed-PII-Italian-ClinicalLongformer-149M-v1** is a transformer-based token classification model fine-tuned for **Personally Identifiable Information (PII) detection in Italian text**. This model identifies and classifies **54 types of sensitive information**, including names, addresses, social security numbers, bank and payment details, and contact information.
61
+
62
+ ### Key Features
63
+
64
+ - **Italian-Optimized**: Specifically trained on Italian text for optimal performance
65
+ - **High Accuracy**: 95.1% micro-F1 on the AI4Privacy Italian test split
66
+ - **Comprehensive Coverage**: Detects 54 entity types spanning identifier, personal, contact, location, organization, financial, and temporal information
67
+ - **Privacy-Focused**: Designed for de-identification and compliance with GDPR and other privacy regulations
68
+ - **Production-Ready**: Optimized for real-world text processing pipelines
69
+
70
+ ## Performance
71
+
72
+ Evaluated on the Italian subset of the AI4Privacy dataset:
73
+
74
+ | Metric | Score |
75
+ |:---|:---:|
76
+ | **Micro F1** | **0.9514** |
77
+ | Precision | 0.9489 |
78
+ | Recall | 0.9540 |
79
+ | Macro F1 | 0.9344 |
80
+ | Weighted F1 | 0.9478 |
81
+ | Accuracy | 0.9926 |
82
+
83
+ ### Top 10 OpenMed Italian PII Models
84
+
85
+ | Rank | Model | F1 | Precision | Recall |
86
+ |:---:|:---|:---:|:---:|:---:|
87
+ | 1 | [OpenMed-PII-Italian-SuperClinical-Large-434M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-SuperClinical-Large-434M-v1) | 0.9728 | 0.9707 | 0.9750 |
88
+ | 2 | [OpenMed-PII-Italian-EuroMed-210M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-EuroMed-210M-v1) | 0.9685 | 0.9663 | 0.9707 |
89
+ | 3 | [OpenMed-PII-Italian-ClinicalBGE-568M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-ClinicalBGE-568M-v1) | 0.9678 | 0.9653 | 0.9703 |
90
+ | 4 | [OpenMed-PII-Italian-SnowflakeMed-Large-568M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-SnowflakeMed-Large-568M-v1) | 0.9678 | 0.9653 | 0.9702 |
91
+ | 5 | [OpenMed-PII-Italian-BigMed-Large-560M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-BigMed-Large-560M-v1) | 0.9671 | 0.9645 | 0.9697 |
92
+ | 6 | [OpenMed-PII-Italian-SuperMedical-Large-355M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-SuperMedical-Large-355M-v1) | 0.9663 | 0.9640 | 0.9686 |
93
+ | 7 | [OpenMed-PII-Italian-mClinicalE5-Large-560M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-mClinicalE5-Large-560M-v1) | 0.9659 | 0.9633 | 0.9684 |
94
+ | 8 | [OpenMed-PII-Italian-NomicMed-Large-395M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-NomicMed-Large-395M-v1) | 0.9656 | 0.9631 | 0.9682 |
95
+ | 9 | [OpenMed-PII-Italian-ClinicalBGE-Large-335M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-ClinicalBGE-Large-335M-v1) | 0.9605 | 0.9575 | 0.9635 |
96
+ | 10 | [OpenMed-PII-Italian-SuperClinical-Base-184M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Italian-SuperClinical-Base-184M-v1) | 0.9596 | 0.9573 | 0.9620 |
97
+
98
+ ## Supported Entity Types
99
+
100
+ This model detects **54 PII entity types** organized into categories:
101
+
102
+ <details>
103
+ <summary><strong>Identifiers</strong> (22 types)</summary>
104
+
105
+ | Entity | Description |
106
+ |:---|:---|
107
+ | `ACCOUNTNAME` | Account name |
108
+ | `BANKACCOUNT` | Bank account number |
109
+ | `BIC` | BIC / SWIFT code |
110
+ | `BITCOINADDRESS` | Bitcoin wallet address |
111
+ | `CREDITCARD` | Credit card number |
112
+ | `CREDITCARDISSUER` | Credit card issuer |
113
+ | `CVV` | Card verification value (CVV) |
114
+ | `ETHEREUMADDRESS` | Ethereum wallet address |
115
+ | `IBAN` | IBAN (international bank account number) |
116
+ | `IMEI` | IMEI (device identifier) |
117
+ | ... | *and 12 more* |
118
+
119
+ </details>
120
+
121
+ <details>
122
+ <summary><strong>Personal Info</strong> (11 types)</summary>
123
+
124
+ | Entity | Description |
125
+ |:---|:---|
126
+ | `AGE` | Age |
127
+ | `DATEOFBIRTH` | Date of birth |
128
+ | `EYECOLOR` | Eye color |
129
+ | `FIRSTNAME` | First name |
130
+ | `GENDER` | Gender |
131
+ | `HEIGHT` | Height |
132
+ | `LASTNAME` | Last name |
133
+ | `MIDDLENAME` | Middle name |
134
+ | `OCCUPATION` | Occupation |
135
+ | `PREFIX` | Name prefix / honorific |
136
+ | ... | *and 1 more* |
137
+
138
+ </details>
139
+
140
+ <details>
141
+ <summary><strong>Contact Info</strong> (2 types)</summary>
142
+
143
+ | Entity | Description |
144
+ |:---|:---|
145
+ | `EMAIL` | Email address |
146
+ | `PHONE` | Phone number |
147
+
148
+ </details>
149
+
150
+ <details>
151
+ <summary><strong>Location</strong> (9 types)</summary>
152
+
153
+ | Entity | Description |
154
+ |:---|:---|
155
+ | `BUILDINGNUMBER` | Building number |
156
+ | `CITY` | City |
157
+ | `COUNTY` | County |
158
+ | `GPSCOORDINATES` | GPS coordinates |
159
+ | `ORDINALDIRECTION` | Ordinal direction (e.g., north-east) |
160
+ | `SECONDARYADDRESS` | Secondary address (apartment, suite, unit) |
161
+ | `STATE` | State / region |
162
+ | `STREET` | Street name |
163
+ | `ZIPCODE` | Postal / ZIP code |
164
+
165
+ </details>
166
+
167
+ <details>
168
+ <summary><strong>Organization</strong> (3 types)</summary>
169
+
170
+ | Entity | Description |
171
+ |:---|:---|
172
+ | `JOBDEPARTMENT` | Job department |
173
+ | `JOBTITLE` | Job title |
174
+ | `ORGANIZATION` | Organization name |
175
+
176
+ </details>
177
+
178
+ <details>
179
+ <summary><strong>Financial</strong> (5 types)</summary>
180
+
181
+ | Entity | Description |
182
+ |:---|:---|
183
+ | `AMOUNT` | Monetary amount |
184
+ | `CURRENCY` | Currency |
185
+ | `CURRENCYCODE` | Currency code (e.g., EUR) |
186
+ | `CURRENCYNAME` | Currency name |
187
+ | `CURRENCYSYMBOL` | Currency symbol |
188
+
189
+ </details>
190
+
191
+ <details>
192
+ <summary><strong>Temporal</strong> (2 types)</summary>
193
+
194
+ | Entity | Description |
195
+ |:---|:---|
196
+ | `DATE` | Date |
197
+ | `TIME` | Time |
198
+
199
+ </details>
200
+
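+ The full label inventory shipped with this checkpoint can be read straight from the model configuration (the `config.json` in this repository). A minimal sketch that lists the unique entity types:
+
+ ```python
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("OpenMed/OpenMed-PII-Italian-ClinicalLongformer-149M-v1")
+
+ # Strip the B-/I- prefixes from the BIO labels to recover the unique entity types
+ entity_types = sorted({label.split("-", 1)[-1] for label in config.id2label.values() if label != "O"})
+ print(len(entity_types), "entity types:", entity_types)
+ ```
+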
201
+ ## Usage
202
+
203
+ ### Quick Start
204
+
205
+ ```python
206
+ from transformers import pipeline
207
+
208
+ # Load the PII detection pipeline
209
+ ner = pipeline("ner", model="OpenMed/OpenMed-PII-Italian-ClinicalLongformer-149M-v1", aggregation_strategy="simple")
210
+
211
+ text = """
212
+ Paziente Marco Bianchi (nato il 15/03/1985, CF: BNCMRC85C15H501Z) è stato visitato oggi.
213
+ Contatto: marco.bianchi@email.it, Telefono: +39 333 123 4567.
214
+ Indirizzo: Via Garibaldi 42, 20121 Milano.
215
+ """
216
+
217
+ entities = ner(text)
218
+ for entity in entities:
219
+     print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
220
+ ```
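+
+ As a small follow-on to the snippet above, the detections can be grouped by entity type for a quick overview (this only reuses the `entities` list already produced):
+
+ ```python
+ from collections import defaultdict
+
+ # Group detected spans by their entity type
+ by_type = defaultdict(list)
+ for ent in entities:
+     by_type[ent["entity_group"]].append(ent["word"])
+
+ for label, words in sorted(by_type.items()):
+     print(f"{label}: {words}")
+ ```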
221
+
222
+ ### De-identification Example
223
+
224
+ ```python
225
+ def redact_pii(text, entities):
226
+     """Replace each detected PII span with its entity-type label."""
227
+     # Sort entities by start position (descending) so earlier offsets stay valid while replacing
228
+     sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
229
+     redacted = text
230
+     for ent in sorted_entities:
231
+         redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
232
+     return redacted
233
+
234
+ # Apply de-identification
235
+ redacted_text = redact_pii(text, entities)
236
+ print(redacted_text)
237
+ ```
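+
+ In practice it can help to drop low-confidence detections before redacting, at the cost of a higher false-negative risk. A minimal sketch (the 0.50 threshold is an illustrative value, not a tuned recommendation):
+
+ ```python
+ # Keep only detections the model is reasonably confident about before redacting
+ confident = [ent for ent in entities if ent["score"] >= 0.50]
+ print(redact_pii(text, confident))
+ ```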
238
+
239
+ ### Batch Processing
240
+
241
+ ```python
242
+ from transformers import AutoModelForTokenClassification, AutoTokenizer
243
+ import torch
244
+
245
+ model_name = "OpenMed/OpenMed-PII-Italian-ClinicalLongformer-149M-v1"
246
+ model = AutoModelForTokenClassification.from_pretrained(model_name)
247
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
248
+
249
+ texts = [
250
+ "Paziente Marco Bianchi (nato il 15/03/1985, CF: BNCMRC85C15H501Z) è stato visitato oggi.",
251
+ "Contatto: marco.bianchi@email.it, Telefono: +39 333 123 4567.",
252
+ ]
253
+
254
+ inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
255
+ with torch.no_grad():
256
+     outputs = model(**inputs)
257
+ predictions = torch.argmax(outputs.logits, dim=-1)
258
+ ```
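+
+ The block above stops at raw label ids. A minimal sketch of one way to turn them into readable BIO tags, reusing the variables defined above and skipping padding positions via the attention mask:
+
+ ```python
+ # Map predicted label ids back to BIO tags, ignoring padding positions
+ id2label = model.config.id2label
+ for i in range(len(texts)):
+     tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][i])
+     keep = inputs["attention_mask"][i].bool()
+     labels = [id2label[int(p)] for p in predictions[i]]
+     for token, label, is_real in zip(tokens, labels, keep):
+         if is_real and label != "O":
+             print(f"{token}\t{label}")
+ ```
+
+ For most applications, the `pipeline(..., aggregation_strategy="simple")` interface from the Quick Start section is simpler: it handles this alignment and merges subword pieces into full entity spans.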
259
+
260
+ ## Training Details
261
+
262
+ ### Dataset
263
+
264
+ - **Source**: [AI4Privacy PII Masking 400k](https://huggingface.co/datasets/ai4privacy/pii-masking-400k) (Italian subset); a loading sketch follows below
265
+ - **Format**: BIO-tagged token classification
266
+ - **Labels**: BIO scheme; the released `config.json` exposes 76 labels (O, 54 `B-` tags, and 21 `I-` tags covering the entity types that appear as multi-token spans)
267
+
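+ A hedged sketch of how the Italian portion of the corpus can be pulled with the `datasets` library; the `language` column and the `train` split name are assumptions about the dataset schema, so check the dataset card for the exact field names:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the corpus and keep only the Italian examples (column name assumed)
+ dataset = load_dataset("ai4privacy/pii-masking-400k", split="train")
+ italian = dataset.filter(lambda row: row["language"] == "it")
+ print(italian)
+ ```
+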
268
+ ### Training Configuration
269
+
270
+ - **Max Sequence Length**: 512 tokens
271
+ - **Epochs**: 3
272
+ - **Framework**: Hugging Face Transformers + Trainer API
273
+
274
+ ## Intended Use & Limitations
275
+
276
+ ### Intended Use
277
+
278
+ - **De-identification**: Automated redaction of PII in Italian clinical notes, medical records, and documents
279
+ - **Compliance**: Supporting compliance with GDPR and other privacy regulations
280
+ - **Data Preprocessing**: Preparing datasets for research by removing sensitive information
281
+ - **Audit Support**: Identifying PII in document collections
282
+
283
+ ### Limitations
284
+
285
+ **Important**: This model is intended as an **assistive tool**, not a replacement for human review.
286
+
287
+ - **False Negatives**: Some PII may not be detected; always verify outputs in critical applications
288
+ - **Context Sensitivity**: Performance may vary with domain-specific terminology
289
+ - **Language**: Optimized for Italian text; may not perform well on other languages
290
+
291
+ ## Citation
292
+
293
+ ```bibtex
294
+ @misc{openmed-pii-2026,
295
+ title = {OpenMed-PII-Italian-ClinicalLongformer-149M-v1: Italian PII Detection Model},
296
+ author = {OpenMed Science},
297
+ year = {2026},
298
+ publisher = {Hugging Face},
299
+ url = {https://huggingface.co/OpenMed/OpenMed-PII-Italian-ClinicalLongformer-149M-v1}
300
+ }
301
+ ```
302
+
303
+ ## Links
304
+
305
+ - **Organization**: [OpenMed](https://huggingface.co/OpenMed)
all_results.json ADDED
@@ -0,0 +1,28 @@
1
+ {
2
+ "epoch": 3.0,
3
+ "eval_accuracy": 0.9931973836090804,
4
+ "eval_f1": 0.9544668587896253,
5
+ "eval_loss": 0.01890326663851738,
6
+ "eval_macro_f1": 0.9387767324777618,
7
+ "eval_precision": 0.9521103896103896,
8
+ "eval_recall": 0.9568350214125484,
9
+ "eval_runtime": 11.9939,
10
+ "eval_samples_per_second": 414.543,
11
+ "eval_steps_per_second": 25.93,
12
+ "eval_weighted_f1": 0.9515783746375358,
13
+ "test_accuracy": 0.9926445979815408,
14
+ "test_f1": 0.9514421804710241,
15
+ "test_loss": 0.021018730476498604,
16
+ "test_macro_f1": 0.9343891623497261,
17
+ "test_precision": 0.9489311163895487,
18
+ "test_recall": 0.9539665693817989,
19
+ "test_runtime": 12.8599,
20
+ "test_samples_per_second": 394.171,
21
+ "test_steps_per_second": 24.65,
22
+ "test_weighted_f1": 0.9477791242822929,
23
+ "total_flos": 5868204157042688.0,
24
+ "train_loss": 0.17805063041547933,
25
+ "train_runtime": 1244.7065,
26
+ "train_samples_per_second": 98.684,
27
+ "train_steps_per_second": 1.543
28
+ }
classification_report.txt ADDED
@@ -0,0 +1,64 @@
1
+ Classification Report for Italian PII Detection
2
+ Model: yikuan8/Clinical-Longformer
3
+ ============================================================
4
+
5
+ precision recall f1-score support
6
+
7
+ ACCOUNTNAME 0.99 1.00 0.99 282
8
+ AGE 0.97 0.99 0.98 338
9
+ AMOUNT 1.00 0.92 0.96 116
10
+ BANKACCOUNT 0.98 1.00 0.99 306
11
+ BIC 1.00 0.99 0.99 77
12
+ BITCOINADDRESS 0.91 0.99 0.95 273
13
+ BUILDINGNUMBER 0.92 0.94 0.93 346
14
+ CITY 0.97 0.94 0.95 280
15
+ COUNTY 0.97 0.99 0.98 327
16
+ CREDITCARD 0.78 0.81 0.79 302
17
+ CREDITCARDISSUER 1.00 1.00 1.00 146
18
+ CURRENCY 0.64 0.97 0.77 187
19
+ CURRENCYCODE 0.89 0.91 0.90 85
20
+ CURRENCYNAME 0.25 0.01 0.02 97
21
+ CURRENCYSYMBOL 0.96 0.97 0.97 308
22
+ CVV 0.96 0.97 0.96 97
23
+ DATE 0.68 0.95 0.80 423
24
+ DATEOFBIRTH 0.86 0.43 0.58 327
25
+ EMAIL 1.00 1.00 1.00 423
26
+ ETHEREUMADDRESS 1.00 1.00 1.00 168
27
+ EYECOLOR 0.98 1.00 0.99 108
28
+ FIRSTNAME 0.97 0.96 0.97 1623
29
+ GENDER 0.99 1.00 1.00 302
30
+ GPSCOORDINATES 1.00 1.00 1.00 223
31
+ HEIGHT 0.98 1.00 0.99 126
32
+ IBAN 1.00 1.00 1.00 230
33
+ IMEI 1.00 1.00 1.00 215
34
+ IPADDRESS 1.00 1.00 1.00 783
35
+ JOBDEPARTMENT 0.99 0.99 0.99 327
36
+ JOBTITLE 0.99 0.99 0.99 279
37
+ LASTNAME 0.96 0.94 0.95 441
38
+ LITECOINADDRESS 0.97 0.69 0.80 83
39
+ MACADDRESS 0.99 1.00 1.00 114
40
+ MASKEDNUMBER 0.71 0.66 0.68 209
41
+ MIDDLENAME 0.86 0.93 0.89 310
42
+ OCCUPATION 0.99 1.00 0.99 323
43
+ ORDINALDIRECTION 1.00 1.00 1.00 152
44
+ ORGANIZATION 0.99 1.00 0.99 271
45
+ PASSWORD 1.00 1.00 1.00 286
46
+ PHONE 1.00 0.99 1.00 303
47
+ PIN 0.93 0.88 0.90 72
48
+ PREFIX 0.97 1.00 0.99 298
49
+ SECONDARYADDRESS 0.99 1.00 1.00 316
50
+ SEX 1.00 1.00 1.00 338
51
+ SSN 0.99 1.00 1.00 259
52
+ STATE 0.96 0.99 0.97 294
53
+ STREET 0.97 0.98 0.98 332
54
+ TIME 0.96 1.00 0.98 296
55
+ URL 1.00 1.00 1.00 244
56
+ USERAGENT 1.00 1.00 1.00 233
57
+ USERNAME 0.99 0.99 0.99 332
58
+ VIN 0.99 1.00 0.99 84
59
+ VRM 0.98 1.00 0.99 98
60
+ ZIPCODE 0.94 0.92 0.93 264
61
+
62
+ micro avg 0.95 0.95 0.95 15076
63
+ macro avg 0.94 0.94 0.93 15076
64
+ weighted avg 0.95 0.95 0.95 15076
config.json ADDED
@@ -0,0 +1,200 @@
1
+ {
2
+ "architectures": [
3
+ "LongformerForTokenClassification"
4
+ ],
5
+ "attention_mode": "longformer",
6
+ "attention_probs_dropout_prob": 0.1,
7
+ "attention_window": [
8
+ 512,
9
+ 512,
10
+ 512,
11
+ 512,
12
+ 512,
13
+ 512,
14
+ 512,
15
+ 512,
16
+ 512,
17
+ 512,
18
+ 512,
19
+ 512
20
+ ],
21
+ "bos_token_id": 0,
22
+ "dtype": "float32",
23
+ "eos_token_id": 2,
24
+ "gradient_checkpointing": false,
25
+ "hidden_act": "gelu",
26
+ "hidden_dropout_prob": 0.1,
27
+ "hidden_size": 768,
28
+ "id2label": {
29
+ "0": "O",
30
+ "1": "B-ACCOUNTNAME",
31
+ "2": "B-AGE",
32
+ "3": "B-AMOUNT",
33
+ "4": "B-BANKACCOUNT",
34
+ "5": "B-BIC",
35
+ "6": "B-BITCOINADDRESS",
36
+ "7": "B-BUILDINGNUMBER",
37
+ "8": "B-CITY",
38
+ "9": "B-COUNTY",
39
+ "10": "B-CREDITCARD",
40
+ "11": "B-CREDITCARDISSUER",
41
+ "12": "B-CURRENCY",
42
+ "13": "B-CURRENCYCODE",
43
+ "14": "B-CURRENCYNAME",
44
+ "15": "B-CURRENCYSYMBOL",
45
+ "16": "B-CVV",
46
+ "17": "B-DATE",
47
+ "18": "B-DATEOFBIRTH",
48
+ "19": "B-EMAIL",
49
+ "20": "B-ETHEREUMADDRESS",
50
+ "21": "B-EYECOLOR",
51
+ "22": "B-FIRSTNAME",
52
+ "23": "B-GENDER",
53
+ "24": "B-GPSCOORDINATES",
54
+ "25": "B-HEIGHT",
55
+ "26": "B-IBAN",
56
+ "27": "B-IMEI",
57
+ "28": "B-IPADDRESS",
58
+ "29": "B-JOBDEPARTMENT",
59
+ "30": "B-JOBTITLE",
60
+ "31": "B-LASTNAME",
61
+ "32": "B-LITECOINADDRESS",
62
+ "33": "B-MACADDRESS",
63
+ "34": "B-MASKEDNUMBER",
64
+ "35": "B-MIDDLENAME",
65
+ "36": "B-OCCUPATION",
66
+ "37": "B-ORDINALDIRECTION",
67
+ "38": "B-ORGANIZATION",
68
+ "39": "B-PASSWORD",
69
+ "40": "B-PHONE",
70
+ "41": "B-PIN",
71
+ "42": "B-PREFIX",
72
+ "43": "B-SECONDARYADDRESS",
73
+ "44": "B-SEX",
74
+ "45": "B-SSN",
75
+ "46": "B-STATE",
76
+ "47": "B-STREET",
77
+ "48": "B-TIME",
78
+ "49": "B-URL",
79
+ "50": "B-USERAGENT",
80
+ "51": "B-USERNAME",
81
+ "52": "B-VIN",
82
+ "53": "B-VRM",
83
+ "54": "B-ZIPCODE",
84
+ "55": "I-ACCOUNTNAME",
85
+ "56": "I-AGE",
86
+ "57": "I-AMOUNT",
87
+ "58": "I-CITY",
88
+ "59": "I-COUNTY",
89
+ "60": "I-CURRENCY",
90
+ "61": "I-CURRENCYNAME",
91
+ "62": "I-DATE",
92
+ "63": "I-DATEOFBIRTH",
93
+ "64": "I-EYECOLOR",
94
+ "65": "I-GENDER",
95
+ "66": "I-HEIGHT",
96
+ "67": "I-JOBTITLE",
97
+ "68": "I-ORGANIZATION",
98
+ "69": "I-PHONE",
99
+ "70": "I-SECONDARYADDRESS",
100
+ "71": "I-SSN",
101
+ "72": "I-STATE",
102
+ "73": "I-STREET",
103
+ "74": "I-TIME",
104
+ "75": "I-USERAGENT"
105
+ },
106
+ "ignore_attention_mask": false,
107
+ "initializer_range": 0.02,
108
+ "intermediate_size": 3072,
109
+ "label2id": {
110
+ "B-ACCOUNTNAME": 1,
111
+ "B-AGE": 2,
112
+ "B-AMOUNT": 3,
113
+ "B-BANKACCOUNT": 4,
114
+ "B-BIC": 5,
115
+ "B-BITCOINADDRESS": 6,
116
+ "B-BUILDINGNUMBER": 7,
117
+ "B-CITY": 8,
118
+ "B-COUNTY": 9,
119
+ "B-CREDITCARD": 10,
120
+ "B-CREDITCARDISSUER": 11,
121
+ "B-CURRENCY": 12,
122
+ "B-CURRENCYCODE": 13,
123
+ "B-CURRENCYNAME": 14,
124
+ "B-CURRENCYSYMBOL": 15,
125
+ "B-CVV": 16,
126
+ "B-DATE": 17,
127
+ "B-DATEOFBIRTH": 18,
128
+ "B-EMAIL": 19,
129
+ "B-ETHEREUMADDRESS": 20,
130
+ "B-EYECOLOR": 21,
131
+ "B-FIRSTNAME": 22,
132
+ "B-GENDER": 23,
133
+ "B-GPSCOORDINATES": 24,
134
+ "B-HEIGHT": 25,
135
+ "B-IBAN": 26,
136
+ "B-IMEI": 27,
137
+ "B-IPADDRESS": 28,
138
+ "B-JOBDEPARTMENT": 29,
139
+ "B-JOBTITLE": 30,
140
+ "B-LASTNAME": 31,
141
+ "B-LITECOINADDRESS": 32,
142
+ "B-MACADDRESS": 33,
143
+ "B-MASKEDNUMBER": 34,
144
+ "B-MIDDLENAME": 35,
145
+ "B-OCCUPATION": 36,
146
+ "B-ORDINALDIRECTION": 37,
147
+ "B-ORGANIZATION": 38,
148
+ "B-PASSWORD": 39,
149
+ "B-PHONE": 40,
150
+ "B-PIN": 41,
151
+ "B-PREFIX": 42,
152
+ "B-SECONDARYADDRESS": 43,
153
+ "B-SEX": 44,
154
+ "B-SSN": 45,
155
+ "B-STATE": 46,
156
+ "B-STREET": 47,
157
+ "B-TIME": 48,
158
+ "B-URL": 49,
159
+ "B-USERAGENT": 50,
160
+ "B-USERNAME": 51,
161
+ "B-VIN": 52,
162
+ "B-VRM": 53,
163
+ "B-ZIPCODE": 54,
164
+ "I-ACCOUNTNAME": 55,
165
+ "I-AGE": 56,
166
+ "I-AMOUNT": 57,
167
+ "I-CITY": 58,
168
+ "I-COUNTY": 59,
169
+ "I-CURRENCY": 60,
170
+ "I-CURRENCYNAME": 61,
171
+ "I-DATE": 62,
172
+ "I-DATEOFBIRTH": 63,
173
+ "I-EYECOLOR": 64,
174
+ "I-GENDER": 65,
175
+ "I-HEIGHT": 66,
176
+ "I-JOBTITLE": 67,
177
+ "I-ORGANIZATION": 68,
178
+ "I-PHONE": 69,
179
+ "I-SECONDARYADDRESS": 70,
180
+ "I-SSN": 71,
181
+ "I-STATE": 72,
182
+ "I-STREET": 73,
183
+ "I-TIME": 74,
184
+ "I-USERAGENT": 75,
185
+ "O": 0
186
+ },
187
+ "layer_norm_eps": 1e-05,
188
+ "max_position_embeddings": 4098,
189
+ "model_type": "longformer",
190
+ "num_attention_heads": 12,
191
+ "num_hidden_layers": 12,
192
+ "onnx_export": false,
193
+ "pad_token_id": 1,
194
+ "position_embedding_type": "absolute",
195
+ "sep_token_id": 2,
196
+ "transformers_version": "4.57.3",
197
+ "type_vocab_size": 1,
198
+ "use_cache": true,
199
+ "vocab_size": 50265
200
+ }
eval_results.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "epoch": 3.0,
3
+ "eval_accuracy": 0.9931973836090804,
4
+ "eval_f1": 0.9544668587896253,
5
+ "eval_loss": 0.01890326663851738,
6
+ "eval_macro_f1": 0.9387767324777618,
7
+ "eval_precision": 0.9521103896103896,
8
+ "eval_recall": 0.9568350214125484,
9
+ "eval_runtime": 11.9939,
10
+ "eval_samples_per_second": 414.543,
11
+ "eval_steps_per_second": 25.93,
12
+ "eval_weighted_f1": 0.9515783746375358
13
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:52a42e7431b186e7a7feb7ab644faa7dc28b3ef62c1829785265bd94e7a4bef3
3
+ size 592543232
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "cls_token": {
10
+ "content": "<s>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "eos_token": {
17
+ "content": "</s>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "mask_token": {
24
+ "content": "<mask>",
25
+ "lstrip": true,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ },
30
+ "pad_token": {
31
+ "content": "<pad>",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false
36
+ },
37
+ "sep_token": {
38
+ "content": "</s>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false
43
+ },
44
+ "unk_token": {
45
+ "content": "<unk>",
46
+ "lstrip": false,
47
+ "normalized": false,
48
+ "rstrip": false,
49
+ "single_word": false
50
+ }
51
+ }
test_results.json ADDED
@@ -0,0 +1,12 @@
1
+ {
2
+ "test_accuracy": 0.9926445979815408,
3
+ "test_f1": 0.9514421804710241,
4
+ "test_loss": 0.021018730476498604,
5
+ "test_macro_f1": 0.9343891623497261,
6
+ "test_precision": 0.9489311163895487,
7
+ "test_recall": 0.9539665693817989,
8
+ "test_runtime": 12.8599,
9
+ "test_samples_per_second": 394.171,
10
+ "test_steps_per_second": 24.65,
11
+ "test_weighted_f1": 0.9477791242822929
12
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
1
+ {
2
+ "add_prefix_space": true,
3
+ "added_tokens_decoder": {
4
+ "0": {
5
+ "content": "<s>",
6
+ "lstrip": false,
7
+ "normalized": false,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "1": {
13
+ "content": "<pad>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": true
19
+ },
20
+ "2": {
21
+ "content": "</s>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false,
26
+ "special": true
27
+ },
28
+ "3": {
29
+ "content": "<unk>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false,
34
+ "special": true
35
+ },
36
+ "50264": {
37
+ "content": "<mask>",
38
+ "lstrip": true,
39
+ "normalized": false,
40
+ "rstrip": false,
41
+ "single_word": false,
42
+ "special": true
43
+ }
44
+ },
45
+ "bos_token": "<s>",
46
+ "clean_up_tokenization_spaces": false,
47
+ "cls_token": "<s>",
48
+ "eos_token": "</s>",
49
+ "errors": "replace",
50
+ "extra_special_tokens": {},
51
+ "mask_token": "<mask>",
52
+ "model_max_length": 4096,
53
+ "pad_token": "<pad>",
54
+ "sep_token": "</s>",
55
+ "tokenizer_class": "LongformerTokenizer",
56
+ "trim_offsets": true,
57
+ "unk_token": "<unk>"
58
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "epoch": 3.0,
3
+ "total_flos": 5868204157042688.0,
4
+ "train_loss": 0.17805063041547933,
5
+ "train_runtime": 1244.7065,
6
+ "train_samples_per_second": 98.684,
7
+ "train_steps_per_second": 1.543
8
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff