BERT tiny finetuned on pair_similarity

This is a sentence-transformers model finetuned from prajjwal1/bert-tiny on the pair_similarity dataset. It maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: prajjwal1/bert-tiny
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: pair_similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
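
The output dimensionality and maximum sequence length above can be verified programmatically after loading the model; a minimal sketch:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score")
print(model.get_sentence_embedding_dimension())  # 128
print(model.max_seq_length)                      # 512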

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score")
# Run inference
sentences = [
    'If you control 3 or more face-up "Six Samurai" monsters, you can activate 1 of these effects: Destroy all face-up monsters your opponent controls. Destroy all face-up Spell/Trap Cards your opponent controls. Destroy all Set Spell/Trap Cards your opponent controls.',
    'Target 1 Link Monster you control and 1 monster your opponent controls; destroy them, then draw 1 card. You can only activate 1 "Link Burst" per turn.',
    'While you have 2 or less cards in your hand, all face-up "Fabled" monsters you control gain 400 ATK.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
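
You can also score a single pair of texts directly with the same loaded model; a minimal sketch using two arbitrary example card texts:

embeddings = model.encode([
    "Destroy all face-up monsters your opponent controls.",
    "Target 1 monster on the field; destroy it.",
])
# Cosine similarity between the two embeddings; higher means more similar effect text
score = model.similarity(embeddings[0], embeddings[1])
print(float(score))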

Training Details

Training Dataset

pair_similarity

  • Dataset: pair_similarity at a933de4
  • Size: 8,959 training samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:

             effect_text     score   effect_text2
    type     string          float   string
    min      6 tokens        0.0     8 tokens
    mean     72.39 tokens    0.09    72.1 tokens
    max      191 tokens      1.0     198 tokens
  • Samples:
    1. effect_text: Once per turn, if you Special Summon a DARK Synchro Monster(s) from the Extra Deck: You can target 1 of your "Blackwing" monsters, or "Black-Winged Dragon", with lower ATK that is banished or in your GY; Special Summon it. Once per turn, if a DARK monster(s) you control would be destroyed by battle or card effect, you can remove 1 Black Feather Counter from your field instead.
       score: 0.0
       effect_text2: A Millennium item, it's rumored to block any strong attack.
    2. effect_text: Target 1 face-up monster your opponent controls; the ATK of all other monsters currently on the field become equal to that monster's ATK, until the end of this turn.
       score: 0.0
       effect_text2: While you control a "Blue-Eyes" monster, you choose the attack targets for your opponent's attacks. You can only use each of the following effects of "Dictator of D." once per turn. You can send 1 "Blue-Eyes White Dragon" from your hand or Deck to the GY; Special Summon this card from your hand. You can discard 1 "Blue-Eyes White Dragon", or 1 card that mentions it, then target 1 "Blue-Eyes" monster in your GY; Special Summon it.
    3. effect_text: 1 Tuner + 1+ non-Tuner monsters
       If this card is Synchro Summoned using a Tuner Synchro Monster: You can target 1 Spell/Trap in your GY; add it to your hand. When your opponent activates a card or effect (Quick Effect): You can send 1 Spell/Trap from your hand or field to the GY; Special Summon 1 Level 7 or lower Tuner Synchro Monster from your Extra Deck, GY, or banishment. You can only use each effect of "Diabell, Queen of the White Forest" once per turn.
       score: 0.2
       effect_text2: 1 Aqua monster + 1 Level 10 WATER monster
       Must first be either Fusion Summoned, or Special Summoned (from your Extra Deck) by Tributing 1 Level 10 Aqua monster with 0 ATK. This card can be treated as 3 Tributes for the Tribute Summon of a monster. Cannot be destroyed by battle. Your opponent cannot target monsters you control with card effects, except "Egyptian God Slime", also their monsters cannot target monsters for attacks, except "Egyptian God Slime".
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
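
For reference, this loss can be instantiated from the Sentence Transformers library with the same parameters; a minimal sketch (the full training script is not part of this card):

from sentence_transformers import SentenceTransformer, losses

# Base model; a mean-pooling layer is added automatically when loading a plain BERT checkpoint
model = SentenceTransformer("prajjwal1/bert-tiny")
# CoSENTLoss scores (text_a, text_b) pairs against a float similarity label;
# pairwise_cos_sim is the default similarity_fct
loss = losses.CoSENTLoss(model, scale=20.0)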
    

Evaluation Dataset

pair_similarity

  • Dataset: pair_similarity at a933de4
  • Size: 1,920 evaluation samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:

             effect_text     score   effect_text2
    type     string          float   string
    min      10 tokens       0.0     5 tokens
    mean     72.56 tokens    0.09    71.33 tokens
    max      202 tokens      1.0     186 tokens
  • Samples:
    1. effect_text: A proud ruler of the jungle that some fear and others respect.
       score: 0.0
       effect_text2: Cannot attack the turn it is Normal Summoned. Once per turn: You can target 1 face-up monster on the field; change this card to Defense Position, and if you do, that target loses 800 ATK until the end of this turn.
    2. effect_text: During your opponent's Main Phase or Battle Phase: You can Special Summon 1 non-Tuner monster from your hand, but it has its effects negated (if any), and if you do, immediately after this effect resolves, Synchro Summon 1 Machine-Type Synchro Monster using only that monster and this card (this is a Quick Effect). You can only use this effect of "Crystron Quan" once per turn.
       score: 0.0
       effect_text2: You can Tribute this card while "Neo Space" is on the field to Special Summon 1 "Neo-Spacian Dark Panther" from your hand or Deck.
    3. effect_text: When your opponent Special Summons a monster(s): Destroy it, then you can banish 5 Zombie monsters from your GY, and if you do, Special Summon 1 Level 7 or higher Zombie monster from your hand or Deck.
       score: 0.25
       effect_text2: You can target 1 Dragon monster you control; it gains ATK/DEF equal to the total Link Rating of the Link Monsters currently on the field x 100, until the end of the opponent's turn. You can only use this effect of "Guardragon Shield" once per turn. Once per turn, if exactly 1 Dragon monster you control would be destroyed by battle or card effect, you can send 1 Normal Monster from your hand or Deck to the GY instead.
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
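
These evaluation pairs can be scored with the library's built-in similarity evaluator; a minimal sketch, assuming the evaluation split is loaded as a Hugging Face datasets object named eval_dataset (a placeholder name) with the three columns above:

from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score")
# Reports correlation between predicted cosine similarities and the gold scores
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_dataset["effect_text"],   # eval_dataset is a placeholder
    sentences2=eval_dataset["effect_text2"],
    scores=eval_dataset["score"],
    main_similarity=SimilarityFunction.COSINE,
)
print(evaluator(model))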
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
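
These values map directly onto SentenceTransformerTrainingArguments; a minimal sketch reproducing the non-default settings above (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="tiny_bert_ft_sim_score",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)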

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step    Training Loss    Validation Loss
0.1786   100     3.8917           3.7898
0.3571   200     3.7289           3.7576
0.5357   300     3.6719           3.7211
0.7143   400     3.6294           3.6751
0.8929   500     3.5188           3.6291
1.0714   600     3.6794           3.5768
1.25     700     3.4962           3.5798
1.4286   800     3.4325           3.6149
1.6071   900     3.3956           3.6151
1.7857   1000    3.2907           3.7533
1.9643   1100    3.3685           3.5106
2.1429   1200    3.502            3.4844
2.3214   1300    3.3796           3.6363
2.5      1400    3.2383           3.5744
2.6786   1500    3.1346           3.6568
2.8571   1600    3.1808           3.6278
3.0357   1700    3.3241           3.4786
3.2143   1800    3.2864           3.4705
3.3929   1900    3.2056           3.5290
3.5714   2000    3.1519           3.6228
3.75     2100    3.0889           3.5919
3.9286   2200    2.9385           3.6148
4.1071   2300    3.2051           3.5180
4.2857   2400    3.2581           3.5216
4.4643   2500    3.0765           3.5968
4.6429   2600    2.9497           3.6496
4.8214   2700    2.8502           3.6804
5.0      2800    3.1919           3.6668

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}