PII & De-Identification
Collection
Models for extracting PII entities and de-identifying clinical text, with support for HIPAA and GDPR compliance.
German PII Detection Model | 560M Parameters | Open Source
OpenMed-PII-German-mClinicalE5-Large-560M-v1 is a transformer-based token classification model fine-tuned for Personally Identifiable Information (PII) detection in German text. This model identifies and classifies 54 types of sensitive information including names, addresses, social security numbers, medical record numbers, and more.
Evaluated on the German subset of the AI4Privacy dataset:
| Metric | Score |
|---|---|
| Micro F1 | 0.9617 |
| Precision | 0.9582 |
| Recall | 0.9653 |
| Macro F1 | 0.9458 |
| Weighted F1 | 0.9599 |
| Accuracy | 0.9940 |
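For readers sanity-checking numbers like these: NER benchmarks typically score exact span matches, i.e. a prediction counts only if entity type, start, and end all agree with the gold annotation. A minimal, dependency-free sketch of span-level precision/recall/F1 (this is an illustration of the metric, not the project's actual evaluation harness, which would normally use a library such as seqeval):

```python
def span_f1(gold, pred):
    """Compute span-level precision, recall, and F1 over exact
    (entity_type, start, end) matches."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)  # true positives: exact span matches
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical gold/predicted spans for one sentence
gold = [("FIRSTNAME", 8, 12), ("LASTNAME", 13, 20), ("DATEOFBIRTH", 33, 43)]
pred = [("FIRSTNAME", 8, 12), ("LASTNAME", 13, 20)]
print(span_f1(gold, pred))  # → (1.0, 0.666..., 0.8): all predictions correct, one gold span missed
```

Micro F1 pools matches across all entity types as above; macro F1 averages per-type F1 scores, which is why it is lower when rare entity types are harder.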
Benchmark comparison of OpenMed German PII models on the same test set:

| Rank | Model | F1 | Precision | Recall |
|---|---|---|---|---|
| 1 | OpenMed-PII-German-SuperClinical-Large-434M-v1 | 0.9761 | 0.9744 | 0.9778 |
| 2 | OpenMed-PII-German-SnowflakeMed-Large-568M-v1 | 0.9724 | 0.9705 | 0.9743 |
| 3 | OpenMed-PII-German-ClinicalBGE-568M-v1 | 0.9724 | 0.9702 | 0.9745 |
| 4 | OpenMed-PII-German-BigMed-Large-560M-v1 | 0.9714 | 0.9696 | 0.9732 |
| 5 | OpenMed-PII-German-NomicMed-Large-395M-v1 | 0.9713 | 0.9690 | 0.9735 |
| 6 | OpenMed-PII-German-SuperMedical-Large-355M-v1 | 0.9701 | 0.9684 | 0.9719 |
| 7 | OpenMed-PII-German-EuroMed-210M-v1 | 0.9683 | 0.9667 | 0.9699 |
| 8 | OpenMed-PII-German-ClinicalBGE-Large-335M-v1 | 0.9652 | 0.9624 | 0.9680 |
| 9 | OpenMed-PII-German-ClinicalE5-Large-335M-v1 | 0.9646 | 0.9620 | 0.9672 |
| 10 | OpenMed-PII-German-BiomedELECTRA-Large-335M-v1 | 0.9638 | 0.9598 | 0.9677 |
This model detects 54 PII entity types organized into categories:
**Financial & device identifiers**

| Entity | Description |
|---|---|
| ACCOUNTNAME | Account name |
| BANKACCOUNT | Bank account number |
| BIC | Bank identifier code (BIC/SWIFT) |
| BITCOINADDRESS | Bitcoin address |
| CREDITCARD | Credit card number |
| CREDITCARDISSUER | Credit card issuer |
| CVV | Card verification value |
| ETHEREUMADDRESS | Ethereum address |
| IBAN | International bank account number |
| IMEI | Mobile device IMEI number |
| ... | and 12 more |

**Personal attributes**

| Entity | Description |
|---|---|
| AGE | Age |
| DATEOFBIRTH | Date of birth |
| EYECOLOR | Eye color |
| FIRSTNAME | First name |
| GENDER | Gender |
| HEIGHT | Height |
| LASTNAME | Last name |
| MIDDLENAME | Middle name |
| OCCUPATION | Occupation |
| PREFIX | Name prefix |
| ... | and 1 more |

**Contact information**

| Entity | Description |
|---|---|
| EMAIL | Email address |
| PHONE | Phone number |

**Address & location**

| Entity | Description |
|---|---|
| BUILDINGNUMBER | Building number |
| CITY | City |
| COUNTY | County |
| GPSCOORDINATES | GPS coordinates |
| ORDINALDIRECTION | Ordinal direction |
| SECONDARYADDRESS | Secondary address (apartment, suite) |
| STATE | State |
| STREET | Street name |
| ZIPCODE | Postal/ZIP code |

**Professional**

| Entity | Description |
|---|---|
| JOBDEPARTMENT | Job department |
| JOBTITLE | Job title |
| ORGANIZATION | Organization name |

**Currency & amounts**

| Entity | Description |
|---|---|
| AMOUNT | Monetary amount |
| CURRENCY | Currency |
| CURRENCYCODE | Currency code |
| CURRENCYNAME | Currency name |
| CURRENCYSYMBOL | Currency symbol |

**Date & time**

| Entity | Description |
|---|---|
| DATE | Date |
| TIME | Time |
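Under the hood, a token-classification model emits BIO-prefixed labels (e.g. `B-FIRSTNAME`, `I-FIRSTNAME`, `O`) that must be merged into the entity spans listed above; the pipeline's `aggregation_strategy="simple"` handles this merging. A simplified illustration of the idea, with hypothetical tokens and labels (not the pipeline's exact algorithm, which also resolves subword tokens and scores):

```python
def bio_to_spans(tokens, labels):
    """Merge per-token BIO labels into (entity_type, text) spans."""
    spans, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            # A B- tag starts a new entity, closing any open one
            if current:
                spans.append(current)
            current = [lab[2:], [tok]]
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            # An I- tag continues the open entity of the same type
            current[1].append(tok)
        else:
            # O tag (or inconsistent I-) closes the open entity
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(toks)) for etype, toks in spans]

tokens = ["Hans", "Schmidt", "wohnt", "in", "München"]
labels = ["B-FIRSTNAME", "B-LASTNAME", "O", "O", "B-CITY"]
print(bio_to_spans(tokens, labels))
# → [('FIRSTNAME', 'Hans'), ('LASTNAME', 'Schmidt'), ('CITY', 'München')]
```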
```python
from transformers import pipeline

# Load the PII detection pipeline
ner = pipeline(
    "ner",
    model="OpenMed/OpenMed-PII-German-mClinicalE5-Large-560M-v1",
    aggregation_strategy="simple",
)

text = """
Patient Hans Schmidt (geboren am 15.03.1985, SVN: 12 150385 M 234) wurde heute untersucht.
Kontakt: hans.schmidt@email.de, Telefon: 0171 234 5678.
Adresse: Mozartstraße 15, 80336 München.
"""

entities = ner(text)
for entity in entities:
    print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
```
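Depending on the application, you may want to drop low-confidence predictions before acting on them. A minimal sketch, assuming the entity dicts returned by the pipeline above; the 0.80 threshold is an arbitrary illustration, not a validated operating point:

```python
def filter_entities(entities, threshold=0.80):
    """Keep only entities whose confidence score meets the threshold."""
    return [e for e in entities if e["score"] >= threshold]

# Hypothetical pipeline output for demonstration
sample = [
    {"entity_group": "FIRSTNAME", "word": "Hans", "score": 0.99, "start": 8, "end": 12},
    {"entity_group": "CITY", "word": "Essen", "score": 0.41, "start": 30, "end": 35},
]
print(filter_entities(sample))  # keeps only the high-confidence FIRSTNAME span
```

For de-identification, a lower threshold (favoring recall) is usually safer than a higher one, since a missed PII span is costlier than an over-redacted token.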
```python
def redact_pii(text, entities):
    """Replace detected PII spans with their entity-type labels."""
    # Sort entities by start position (descending) so earlier offsets stay valid
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

# Apply de-identification
redacted_text = redact_pii(text, entities)
print(redacted_text)
```
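Some de-identification workflows need referential consistency: repeated mentions of the same person should map to the same placeholder so the document stays readable. A sketch of that variant, assuming the same entity-dict format as above (this is an illustrative extension, not part of the model's tooling):

```python
from collections import defaultdict

def pseudonymize(text, entities):
    """Replace each distinct PII surface form with a numbered placeholder,
    e.g. [FIRSTNAME_1], so repeated mentions share the same token."""
    counters = defaultdict(int)
    mapping = {}
    out = text
    # Replace right-to-left so earlier character offsets stay valid
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        key = (ent["entity_group"], text[ent["start"]:ent["end"]])
        if key not in mapping:
            counters[ent["entity_group"]] += 1
            mapping[key] = f"[{ent['entity_group']}_{counters[ent['entity_group']]}]"
        out = out[:ent["start"]] + mapping[key] + out[ent["end"]:]
    return out

sample_text = "Hans Schmidt rief an. Hans fragte nach dem Termin."
sample_entities = [
    {"entity_group": "FIRSTNAME", "start": 0, "end": 4},
    {"entity_group": "LASTNAME", "start": 5, "end": 12},
    {"entity_group": "FIRSTNAME", "start": 22, "end": 26},
]
print(pseudonymize(sample_text, sample_entities))
# → [FIRSTNAME_1] [LASTNAME_1] rief an. [FIRSTNAME_1] fragte nach dem Termin.
```

Note that consistent pseudonyms leak co-reference information; whether that is acceptable depends on your HIPAA/GDPR risk assessment.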
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "OpenMed/OpenMed-PII-German-mClinicalE5-Large-560M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Patient Hans Schmidt (geboren am 15.03.1985, SVN: 12 150385 M 234) wurde heute untersucht.",
    "Kontakt: hans.schmidt@email.de, Telefon: 0171 234 5678.",
]

inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)

# Map predicted label IDs back to entity tags per token
labels = [[model.config.id2label[p.item()] for p in row] for row in predictions]
```
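Clinical documents often exceed the model's maximum sequence length, so long inputs must be chunked and the predicted spans mapped back to document offsets. A naive character-window sketch of the idea (a production system would chunk on token boundaries, e.g. via the tokenizer's `stride`/overflow options, and merge overlapping entities more carefully; `ner` here is the pipeline defined earlier):

```python
def chunk_text(text, max_chars=1000, overlap=100):
    """Split text into overlapping character windows, returning
    (offset, chunk) pairs so spans can be mapped back to the document."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append((start, text[start:end]))
        if end == len(text):
            break
        start = end - overlap  # overlap so entities on a boundary are not split
    return chunks

def ner_long(ner, text, max_chars=1000, overlap=100):
    """Run a NER pipeline over a long document chunk by chunk."""
    results = []
    for offset, chunk in chunk_text(text, max_chars, overlap):
        for ent in ner(chunk):
            ent["start"] += offset
            ent["end"] += offset
            results.append(ent)
    # Drop duplicates found in the overlapping regions
    seen, unique = set(), []
    for ent in sorted(results, key=lambda e: e["start"]):
        key = (ent["start"], ent["end"], ent["entity_group"])
        if key not in seen:
            seen.add(key)
            unique.append(ent)
    return unique
```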
**Important:** This model is intended as an assistive tool, not a replacement for human review.
```bibtex
@misc{openmed-pii-2026,
  title = {OpenMed-PII-German-mClinicalE5-Large-560M-v1: German PII Detection Model},
  author = {OpenMed Science},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/OpenMed/OpenMed-PII-German-mClinicalE5-Large-560M-v1}
}
```
Base model: intfloat/multilingual-e5-large-instruct