
contextboxai/halong_embedding

Tags:
  • Sentence Similarity
  • sentence-transformers
  • Safetensors
  • Vietnamese
  • English
  • xlm-roberta
  • feature-extraction
  • Generated from Trainer
  • loss:MatryoshkaLoss
  • loss:MultipleNegativesRankingLoss
  • Eval Results (legacy)
  • text-embeddings-inference
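The loss:MatryoshkaLoss tag indicates the model was trained so that a truncated prefix of each embedding remains usable on its own. A minimal NumPy sketch of the truncate-and-renormalize step (toy vectors only, no model download; the 256-dimension cutoff is an illustrative choice, not one documented by this model card):

```python
import numpy as np

def truncate_embeddings(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of each row, then re-normalize to unit length."""
    truncated = emb[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

# Stand-in for full-size model embeddings (4 sentences, 768 dimensions)
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 768))

# Matryoshka-style truncated embeddings: smaller, still unit-normalized
small = truncate_embeddings(emb, 256)
print(small.shape)  # (4, 256)
```

With sentence-transformers, the same effect is typically achieved by passing a reduced `truncate_dim` when loading the model, trading a little accuracy for smaller vectors.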

Instructions for using contextboxai/halong_embedding with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use contextboxai/halong_embedding with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    model = SentenceTransformer("contextboxai/halong_embedding")
    
    sentences = [
        "Bóng đá có lợi ích gì cho sức khỏe?",  # What health benefits does football offer?
        "Bóng đá giúp cải thiện sức khỏe tim mạch và tăng cường sức bền.",  # Football improves cardiovascular health and builds endurance.
        "Bóng đá là môn thể thao phổ biến nhất thế giới.",  # Football is the most popular sport in the world.
        "Bóng đá có thể giúp bạn kết nối với nhiều người hơn."  # Football can help you connect with more people.
    ]
    embeddings = model.encode(sentences)
    
    # Pairwise similarity matrix (cosine similarity by default)
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # [4, 4]
  • Inference
  • Notebooks
  • Google Colab
  • Kaggle
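The similarity call in the usage snippet above defaults to cosine similarity between every pair of embeddings. A minimal NumPy sketch of that computation (toy 2-dimensional vectors standing in for real embeddings, so no model download is needed):

```python
import numpy as np

def cosine_similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Toy embeddings standing in for model.encode(...) output
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(emb, emb)
print(sims.shape)                    # (3, 3)
print(round(float(sims[0, 2]), 4))  # 0.7071
```

Each row is normalized to unit length first, so the matrix product directly yields cosines; the diagonal is always 1.0 (each vector compared with itself).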
halong_embedding (1.13 GB)
  • 1 contributor
History: 9 commits
Latest commit: b577760 (verified) by hiieu, "Update README.md", over 1 year ago
  • 1_Pooling (Add new SentenceTransformer model, almost 2 years ago)
  • .gitattributes, 1.57 kB (Add new SentenceTransformer model, almost 2 years ago)
  • README.md, 13.9 kB (Update README.md, over 1 year ago)
  • config.json, 749 Bytes (Add new SentenceTransformer model, almost 2 years ago)
  • config_sentence_transformers.json, 201 Bytes (Add new SentenceTransformer model, almost 2 years ago)
  • model.safetensors, 1.11 GB (Add new SentenceTransformer model, almost 2 years ago)
  • modules.json, 349 Bytes (Add new SentenceTransformer model, almost 2 years ago)
  • sentence_bert_config.json, 53 Bytes (Add new SentenceTransformer model, almost 2 years ago)
  • special_tokens_map.json, 964 Bytes (Add new SentenceTransformer model, almost 2 years ago)
  • tokenizer.json, 17.1 MB (Add new SentenceTransformer model, almost 2 years ago)
  • tokenizer_config.json, 1.34 kB (Add new SentenceTransformer model, almost 2 years ago)