SentenceTransformer based on am-azadi/UAE-Large-V1_Fine_Tuned

This is a sentence-transformers model finetuned from am-azadi/UAE-Large-V1_Fine_Tuned. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: am-azadi/UAE-Large-V1_Fine_Tuned
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 335M parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
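
The pooling block above keeps only the [CLS] token embedding (pooling_mode_cls_token: True). For reference, here is a minimal sketch of the equivalent pooling with plain transformers; the checkpoint id is taken from this card, and the final L2 normalization is an added step (an assumption) so that dot products match cosine similarities:

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "am-azadi/UAE-Large-V1_Fine_Tuned_3e"  # id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(["An example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)

# CLS pooling: keep the first token's hidden state, as configured above
embeddings = output.last_hidden_state[:, 0]
# L2-normalize so dot product equals cosine similarity (assumption for retrieval use)
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 1024])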

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("am-azadi/UAE-Large-V1_Fine_Tuned_3e")
# Run inference
sentences = [
    'Look what a show Pope Francis gave in yesterday\'s homily / sermon! It\'s to be read and reread over and over again... This is the most spiritual Pope since Peter. "You may have flaws, be anxious, and sometimes live irritated, but don\'t forget that your life is the greatest company in the world. Only you can prevent it from going into decline. Many appreciate you, admire you and love you. remember that being happy is not having a sky without storms, a road without accidents, work without fatigue, relationships without disappointments. Being happy is finding strength in forgiveness, hope in battles, security in the stage of fear, love in discord. It\'s not just appreciating the smile, but also reflecting on sadness. It\'s not just celebrating successes, but learning lessons from failures. It\'s not just feeling happy with applause, but being happy in anonymity. Being happy is recognizing that it\'s worth life is worth living, despite all the challenges, misunderstandings, periods of crisis. Being happy is not a fatality of fate, but an achievement for those who manage to travel within themselves. To be happy is to stop feeling like a victim of problems and become the author of his own story . It\'s crossing deserts outside of yourself, but managing to find an oasis in the depths of our soul. It is to thank God for each morning, for the miracle of life. Being happy is not being afraid of your own feelings. It\'s knowing how to talk about yourself. It\'s having the courage to hear a "no". It\'s feeling safe when receiving criticism, even if unfair. It\'s kissing the children, pampering the parents, living poetic moments with friends, even when they hurt us. To be happy is to let the creature that lives in each of us live, free, joyful and simple. It\'s having maturity to be able to say: "I was wrong". It\'s having the courage to say, "I\'m sorry". It\'s having the sensitivity to say: "I need you". It\'s having the ability to say, "I love you". May your life become a garden of opportunities to be happy... May your springtime be a lover of joy. May in your winters be a lover of wisdom. And that when you make a mistake, start over from the beginning. For only then will you be in love with life. You will discover that being happy is not having a perfect life. But using tears to irrigate tolerance. Use losses to train patience. Using mistakes to sculpt serenity. Using pain to cut pleasure. Use obstacles to open intelligence windows. Never give up....Never give up on the people who love you. Never give up happiness, for life is an incredible spectacle." (Pope Francis).',
    '"The message that Pope Francis gave in yesterday\'s homily/sermon! It is to be read and reread several times... What an admirable man!"',
    'Denmark allows Muslim women to wear the niqab',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
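
The same embeddings cover the semantic search use case mentioned above. A small sketch using sentence_transformers.util; the corpus and query here are illustrative:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("am-azadi/UAE-Large-V1_Fine_Tuned_3e")

corpus = [
    "Denmark allows Muslim women to wear the niqab",
    "Performance by a blind American ice dancer",
]
query = "Danish rules on wearing the niqab"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# For each query, returns the top-k corpus entries ranked by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))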

Training Details

Training Dataset

Unnamed Dataset

  • Size: 25,743 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 2 tokens, mean 100.7 tokens, max 512 tokens
    • sentence_1 (string): min 4 tokens, mean 17.62 tokens, max 179 tokens
    • label (float): min 1.0, mean 1.0, max 1.0
  • Samples:
    • sentence_0: best music k.m KOSE CELLIE HINS GUINOT SKIN CARE KWhat people fear most is not being physically disabled, but giving up on themselves. There are still many beautiful things in life to aspire to! This stunning performance, known as the American spirit, brought tears to the eyes of 10,000 spectators. Male dancer Babo has been blind since childhood due to a fire in his home. In order to protect him, his mother held him tightly in her arms and jumped from the 7th floor. The mother died as a result, and the little baby became blind due to bleeding from the fundus. His mother was an ice skater before he died, and Babo also had a soft spot for ice skating. Although he couldn't see anything, he still pursued dance enthusiastically. He danced the famous tango "La Cumparsita" with his partner at the World Figure Skating Championships in Helsinki! 1. His ears are like bats that can measure the sound and distance around him. 2. The female dancer is very amazing. She danced with him and led him for...
      sentence_1: Performance by a blind American ice dancer
      label: 1.0
    • sentence_0: Photo from 2016. "Good" times when health was "fine" and the press did not report anything about. Bunch of Hypocrites...Let's go fight my people... . left right not army above all
      sentence_1: Photo of a hospital in 2016. Good times when health was "good" and the press didn't report anything about it
      label: 1.0
    • sentence_0: Haifa Oh Tel Aviv-Yafo Oh N WEST BANK Jerusalem is GAZA STRIPE Be'er Sheva Israel 65 65 35 35 15 M5 10 40Google and Apple maps have officially removed Palestine from the World Maps. Today Palestine was erased from the maps tomorrow Palestine will be erased from the world. PUT PALESTINE BACK ON THE MAP. Please unite now Pakistanio. Enemy is very strong if we are divided. Think just about Pakistan. Support each other, support Pakistan and support your leadership.
      sentence_1: Google and Apple removed Palestine from its maps
      label: 1.0
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
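
These parameters match the Sentence Transformers defaults for this loss (scale 20.0, cosine similarity). A minimal sketch of how the loss would be constructed; with this loss, each (sentence_0, sentence_1) pair is treated as a positive and the other in-batch pairs serve as negatives, which is consistent with the uniformly 1.0 label column above:

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("am-azadi/UAE-Large-V1_Fine_Tuned")
# scale=20.0 and cosine similarity match the parameters listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)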
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 2
  • per_device_eval_batch_size: 2
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
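
A sketch of how these non-default values map onto the Sentence Transformers 3.x trainer API; the dataset construction and output directory are illustrative assumptions, not the exact training script:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("am-azadi/UAE-Large-V1_Fine_Tuned")
# Illustrative stand-in for the unnamed 25,743-pair training dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["claim text ..."],
    "sentence_1": ["matching fact-check title ..."],
})
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumed path
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=1,
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()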

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 2
  • per_device_eval_batch_size: 2
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0388 500 0.0173
0.0777 1000 0.0124
0.1165 1500 0.0127
0.1554 2000 0.0256
0.1942 2500 0.0123
0.2331 3000 0.0199
0.2719 3500 0.0079
0.3108 4000 0.0134
0.3496 4500 0.0127
0.3884 5000 0.026
0.4273 5500 0.0314
0.4661 6000 0.0267
0.5050 6500 0.0145
0.5438 7000 0.0093
0.5827 7500 0.007
0.6215 8000 0.0071
0.6603 8500 0.0116
0.6992 9000 0.0085
0.7380 9500 0.0157
0.7769 10000 0.0051
0.8157 10500 0.0101
0.8546 11000 0.0174
0.8934 11500 0.0116
0.9323 12000 0.0073
0.9711 12500 0.0146
0.0388 500 0.0115
0.0777 1000 0.0083
0.1165 1500 0.0287
0.1554 2000 0.0086
0.1942 2500 0.0157
0.2331 3000 0.0082
0.2719 3500 0.0116
0.3108 4000 0.0044
0.3496 4500 0.0158
0.3884 5000 0.0094
0.4273 5500 0.0087
0.4661 6000 0.0045
0.5050 6500 0.0139
0.5438 7000 0.0125
0.5827 7500 0.0196
0.6215 8000 0.0054
0.6603 8500 0.0061
0.6992 9000 0.0058
0.7380 9500 0.0243
0.7769 10000 0.0022
0.8157 10500 0.0083
0.8546 11000 0.0026
0.8934 11500 0.0036
0.9323 12000 0.0069
0.9711 12500 0.0071

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0
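
To approximate this environment, the listed versions can be pinned at install time (a sketch; a CUDA 12.4 build of PyTorch may require the matching extra index URL):

pip install "sentence-transformers==3.4.1" "transformers==4.48.3" "torch==2.5.1" "accelerate==1.3.0" "datasets==3.3.1" "tokenizers==0.21.0"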

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}