SentenceTransformer based on thenlper/gte-small

This is a sentence-transformers model finetuned from thenlper/gte-small. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: thenlper/gte-small
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 33.4M parameters (F32)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
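
The same three-module stack can be assembled by hand, which makes the pooling and normalization choices explicit. A minimal sketch using sentence_transformers.models, building from the base model rather than this finetuned checkpoint:

from sentence_transformers import SentenceTransformer, models

# Token encoder: a BERT-style base model, truncating inputs to 128 tokens
word_embedding = models.Transformer("thenlper/gte-small", max_seq_length=128)
# Mean pooling over token embeddings (pooling_mode_mean_tokens=True above)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
# L2-normalization, so dot product and cosine similarity coincide
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])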

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("redis/model-b-structured")
# Run inference
sentences = [
    'Schliemann recognized five shafts and cleared them like the graves mentioned by Pausanias .',
    'Schliemann recognized five shafts and cleared them like the graves mentioned by Pausanias .',
    'Schliemann cleared five shafts and recognized them as the graves mentioned by Pausania .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 1.0000, 0.9779],
#         [1.0000, 1.0000, 0.9779],
#         [0.9779, 0.9779, 1.0000]])
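
Because the Normalize module makes every embedding unit-length, the embeddings also work directly for retrieval. A minimal semantic-search sketch using util.semantic_search; the corpus and query below are illustrative, not from the training data:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("redis/model-b-structured")

# Illustrative corpus and query
corpus = [
    "Refrigerated eggs can be used three to five weeks after the carton date.",
    "First graders are usually 6 or 7 years old.",
    "Sandos Caracol Eco Resort sits on Playa del Carmen Beach.",
]
query = "how long do eggs last past the expiration date"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 passages by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")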

Evaluation

Metrics

Information Retrieval

Metric               NanoMSMARCO  NanoNQ
cosine_accuracy@1    0.3          0.38
cosine_accuracy@3    0.54         0.52
cosine_accuracy@5    0.62         0.54
cosine_accuracy@10   0.76         0.68
cosine_precision@1   0.3          0.38
cosine_precision@3   0.18         0.1733
cosine_precision@5   0.124        0.112
cosine_precision@10  0.076        0.072
cosine_recall@1      0.3          0.35
cosine_recall@3      0.54         0.49
cosine_recall@5      0.62         0.52
cosine_recall@10     0.76         0.66
cosine_ndcg@10       0.5241       0.5018
cosine_mrr@10        0.4493       0.4686
cosine_map@100       0.4578       0.4586

Nano BEIR

  • Dataset: NanoBEIR_mean
  • Evaluated with NanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nq"
        ],
        "dataset_id": "lightonai/NanoBEIR-en"
    }
    
Metric               Value
cosine_accuracy@1    0.34
cosine_accuracy@3    0.53
cosine_accuracy@5    0.58
cosine_accuracy@10   0.72
cosine_precision@1   0.34
cosine_precision@3   0.1767
cosine_precision@5   0.118
cosine_precision@10  0.074
cosine_recall@1      0.325
cosine_recall@3      0.515
cosine_recall@5      0.57
cosine_recall@10     0.71
cosine_ndcg@10       0.5129
cosine_mrr@10        0.4589
cosine_map@100       0.4582
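
The table above should be reproducible with the evaluator it names. A minimal sketch; exact constructor options and result-dictionary keys can vary across Sentence Transformers versions:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("redis/model-b-structured")

# Same dataset selection as reported above
evaluator = NanoBEIREvaluator(dataset_names=["msmarco", "nq"])
results = evaluator(model)
print(results)  # includes keys such as NanoBEIR_mean_cosine_ndcg@10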

Training Details

Training Dataset

Unnamed Dataset

  • Size: 111,470 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string, min 4 tokens, mean 10.95 tokens, max 60 tokens
    positive: string, min 6 tokens, mean 67.57 tokens, max 128 tokens
    negative: string, min 7 tokens, mean 66.64 tokens, max 128 tokens
  • Samples:
    • anchor: how far is sandos caracol eco resort from cancun airport
      positive: The Sandos Caracol Eco Resort is 2 miles from the Church of Guadalupe and a 45-minute drive from Cancun Cancún. Airport The Gran Coral Golf Riviera maya is located within the same estate as The. Sandos we speak your! Language Hotel: rooms, 680 Hotel: Chain Sandos & Hotels. resorts
      negative: Featuring a spa, 8 restaurants and 2 outdoor pools, Sandos Caracol Eco Resort is set on Playa del Carmen Beach, overlooking Cozumel Island. Its rooms have balconies overlooking the Caribbean Sea. Sandos Caracol Eco Resort is in beautiful gardens and features bright accommodations.
    • anchor: can eggs expire
      positive: Here is a link from Georgia Eggs Commission about eggs and expiration dates. The following is from Swedish Medical Center Eggs: If you ve purchased a carton of eggs before the date expires, you should be able to use them safely for three to five weeks after expiration.ere is a link from Georgia Eggs Commission about eggs and expiration dates. The following is from Swedish Medical Center Eggs: If you ve purchased a carton of eggs before the date expires, you should be able to use them safely for three to five weeks after expiration.
      negative: The answer to this question may surprise you: while uncooked eggs typically last four to five weeks when properly refrigerated, hard-boiled eggs will only last about a week. This is because egg shells, which are highly porous, are sprayed before sale with a thin coating of mineral oil that seals the egg.
    • anchor: how old are first graders?
      positive: First Grade Worksheets Online. 6 and 7 year old kids get their first taste of real schooling in first grade. Help children learn the basics in math, reading, language and science with our printable first grade worksheets. Spelling Worksheets for 1st Grade.
      negative: Average BMI percentile-for-age values were 59.5 (28.8) for first-graders, 59.5 (30.5) for third-graders, and 62.4 (31.7) for fifth-graders. The number of participants classified as obese was 144 (25.6% of first-graders, 28.5% of third-graders, and 34.5% of fifth-graders). The percentage of students who reported a reasonable height or weight ranged from 20% (first grade, height) to 92% (fifth grade, weight) (Table). In general, self-report ability was better in older children and when self-reporting weight.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 7.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
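
With this loss, every positive and negative in a batch also serves as an in-batch negative for the other anchors, so the large batch size (128 here) matters. A minimal sketch of wiring up the loss with the reported parameters; the single example row is illustrative:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("thenlper/gte-small")

# Triplet dataset with (anchor, positive, negative) columns; the row is illustrative
train_dataset = Dataset.from_dict({
    "anchor": ["can eggs expire"],
    "positive": ["Refrigerated eggs can be used three to five weeks after the carton date."],
    "negative": ["Hard-boiled eggs only last about a week."],
})

# Reported parameters: temperature scale 7.0 with cosine similarity
loss = losses.MultipleNegativesRankingLoss(model, scale=7.0, similarity_fct=util.cos_sim)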
    

Evaluation Dataset

Unnamed Dataset

  • Size: 12,386 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string, min 4 tokens, mean 11.11 tokens, max 66 tokens
    positive: string, min 7 tokens, mean 67.99 tokens, max 128 tokens
    negative: string, min 7 tokens, mean 66.08 tokens, max 128 tokens
  • Samples:
    • anchor: In 1883 , the first schools were built in the vicinity for 400 white and 60 black students .
      positive: In 1883 , the first schools were built in the vicinity for 400 white and 60 black students .
      negative: In 1883 , the first schools in the area were built for 400 black students and 60 white students .
    • anchor: what is the origin of the name haja
      positive: Haja is a Muslim baby Girl name, it is an Arabic originated name. Haja name meaning is In the heart condition through and the lucky number associated with Haja is 5. Find all the relevant details about the Haja Meaning, Origin, Lucky Number and Religion from this page. Average rating of Haja is 1 stars, based on 0 reviews.
      negative: Synonomis with the exclamation commonly used in urban circles Holla. Haba is derived from the term, Holla Bitches, which became Haba Litches, which eventually evolved to Habalicious, and finally became just Haba. When seeing a fine female passing by, Russell exclaimed, Haba.
    • anchor: what causes itch rash
      positive: A rash is a noticeable change in the texture or color of the skin. The skin may become itchy, bumpy, chapped, scaly, or otherwise irritated. Rashes are caused by a wide range of conditions, including allergies, medication, cosmetics, and various diseases. The rash is often reddish and itchy, with a scaly texture. 2 bug bites: tick bites are of particular concern, as they can transmit disease. 3 psoriasis: a scaly, itchy, red rash that forms along the scalp and joints. 4 dandruff: an itchy, flaky rash on the scalp.
      negative: Causes of Similar Symptoms to Behind knee rash. Research the causes of these symptoms that are similar to, or related to, the symptom Behind knee rash: 1 Behind knee itch (14 causes). 2 Knee rash (18 causes).3 Knee pain (122 causes). 4 Knee tingling (6 causes). 5 Knee symptoms (149 causes). 6 Skin itch (1068 causes). 7 Skin rash (461 causes). 8 Insect bite.auses of Similar Symptoms to Behind knee rash. Research the causes of these symptoms that are similar to, or related to, the symptom Behind knee rash: 1 Behind knee itch (14 causes). 2 Knee rash (18 causes).
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 7.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • learning_rate: 1e-06
  • weight_decay: 0.001
  • max_steps: 3000
  • warmup_ratio: 0.1
  • fp16: True
  • dataloader_drop_last: True
  • dataloader_num_workers: 1
  • dataloader_prefetch_factor: 1
  • load_best_model_at_end: True
  • optim: adamw_torch
  • ddp_find_unused_parameters: False
  • push_to_hub: True
  • hub_model_id: redis/model-b-structured
  • eval_on_start: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-06
  • weight_decay: 0.001
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3.0
  • max_steps: 3000
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 1
  • dataloader_prefetch_factor: 1
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: False
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: redis/model-b-structured
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: True
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch   Step  Training Loss  Validation Loss  NanoMSMARCO_cosine_ndcg@10  NanoNQ_cosine_ndcg@10  NanoBEIR_mean_cosine_ndcg@10
0       0     -              4.0678           0.6259                      0.6583                 0.6421
0.2874  250   4.2246         3.8520           0.6117                      0.6465                 0.6291
0.5747  500   3.8138         3.1367           0.6062                      0.6457                 0.6260
0.8621  750   2.9174         1.8442           0.5837                      0.5594                 0.5715
1.1494  1000  1.8256         1.2096           0.5462                      0.4989                 0.5226
1.4368  1250  1.4465         1.0779           0.5347                      0.4650                 0.4998
1.7241  1500  1.3307         1.0331           0.5358                      0.4801                 0.5079
2.0115  1750  1.2785         1.0094           0.5359                      0.4848                 0.5104
2.2989  2000  1.2490         0.9957           0.5282                      0.4860                 0.5071
2.5862  2250  1.2280         0.9865           0.5245                      0.4939                 0.5092
2.8736  2500  1.2043         0.9809           0.5235                      0.5018                 0.5126
3.1609  2750  1.2080         0.9771           0.5261                      0.5018                 0.5139
3.4483  3000  1.2008         0.9762           0.5241                      0.5018                 0.5129

Framework Versions

  • Python: 3.10.18
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 2.21.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}