bert-tiny finetuned on pair_similarity_new_1231

This is a sentence-transformers model finetuned from prajjwal1/bert-tiny on the pair_similarity_new_1231 dataset. It maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: prajjwal1/bert-tiny
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: pair_similarity_new_1231
  • Number of Parameters: 4.39M (F32)
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
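
For reference, an equivalent two-module stack can be assembled by hand from sentence_transformers.models. The sketch below is illustrative only: it loads the base prajjwal1/bert-tiny weights, not this finetuned checkpoint.

from sentence_transformers import SentenceTransformer, models

# Illustrative reconstruction of the stack above: a bert-tiny encoder
# (hidden size 128) followed by mean pooling over token embeddings.
# Note: this builds from the base model, not the finetuned checkpoint.
word_embedding_model = models.Transformer("prajjwal1/bert-tiny", max_seq_length=512)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 128 for bert-tiny
    pooling_mode="mean",  # matches pooling_mode_mean_tokens=True above
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model.get_sentence_embedding_dimension())  # 128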

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_1231_1")
# Run inference
sentences = [
    'This card cannot be Normal Summoned or Set. This card cannot be Special Summoned except by the effect of "The First Sarcophagus". When this card is Special Summoned, you can Special Summon up to 4 Level 2 or lower Zombie-Type Normal Monsters from your GY.',
    'Target 1 Level 4 or lower "Magistus" monster in your GY; Special Summon it. If a "Magistus" card(s) in your Spell & Trap Zone would be destroyed by your opponent\'s card effect, you can banish this card from your GY instead. You can only use each effect of "Magistus Vritra" once per turn.',
    "When this card is Normal Summoned, you can remove from play 1 Psychic-Type monster from your Deck. This card's Level becomes the Level of that monster.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
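
Beyond pairwise scoring, the same embeddings can back semantic search. A minimal sketch, using a made-up corpus and query (the util.semantic_search helper is part of the library):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_1231_1")

# Hypothetical corpus and query, for illustration only
corpus = [
    "Special Summon 1 Zombie-Type monster from your GY.",
    "Draw 2 cards, then discard 1 card.",
]
query = "Revive a Zombie monster from the Graveyard."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Returns the top-k most similar corpus entries for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': ...}]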

Training Details

Training Dataset

pair_similarity_new_1231

  • Dataset: pair_similarity_new_1231 at 757b53a
  • Size: 8,959 training samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:
    • effect_text: string; min: 9 tokens, mean: 73.57 tokens, max: 204 tokens
    • score: float; min: 0.0, mean: 0.42, max: 1.0
    • effect_text2: string; min: 8 tokens, mean: 73.28 tokens, max: 181 tokens
  • Samples:
    • effect_text: When your opponent's monster attacks a face-up Level 4 or lower Toon Monster on your side of the field, you can make the attack a direct attack to your Life Points.
      score: 0.0
      effect_text2: Cannot be used as material for a Fusion, Synchro, or Xyz Summon. Cannot be Tributed while face-up in the Monster Zone. If this card is sent to the GY as material for a Link Summon: Special Summon this card in Defense Position, to the opponent's field of the player that Link Summoned. You can only use this effect of "Click & Echo" twice per turn. While this card, that was Summoned by its effect, is in the Monster Zone, you must keep your hand revealed.
    • effect_text: When your opponent Special Summons a monster, you can discard 1 card to Special Summon this card from your hand. Your opponent cannot remove cards from play.
      score: 0.0
      effect_text2: Once per turn you can place 2 Venom Counters on 1 monster your opponent controls. If you activate this effect, this card cannot attack during this turn.
    • effect_text: "Mystical Elf" + "Curtain of the Dark Ones"
      score: 0.0
      effect_text2: If you Normal or Special Summon a "U.A." monster(s) (except during the Damage Step): You can Special Summon this card from your hand. If this card is Special Summoned: You can activate 1 of these effects. ● Target 1 card on the field; destroy it. ● Negate the effects of all face-up monsters on the field until the end of this turn, except "U.A." monsters. You can only use each effect of "U.A. Player Manager" once per turn.
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
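
For reference, the CoSENTLoss above (Su, 2022; cited below) is a ranking loss over pairs of training pairs: whenever one pair carries a higher gold score than another, the loss pushes its cosine similarity above the other's. With scale λ = 20.0 and pairwise cosine similarity, a batch contributes

    \mathcal{L} = \log\!\left(1 + \sum_{\text{score}(i,j) > \text{score}(k,l)} \exp\!\big(\lambda \, [\cos(u_k, u_l) - \cos(u_i, u_j)]\big)\right)

where the u are the sentence embeddings; only the relative order of the gold scores matters, not their absolute values.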
    

Evaluation Dataset

pair_similarity_new_1231

  • Dataset: pair_similarity_new_1231 at 757b53a
  • Size: 1,920 evaluation samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:
    • effect_text: string; min: 6 tokens, mean: 72.29 tokens, max: 190 tokens
    • score: float; min: 0.0, mean: 0.42, max: 1.0
    • effect_text2: string; min: 5 tokens, mean: 70.39 tokens, max: 219 tokens
  • Samples:
    • effect_text: 2+ Level 4 monsters This Xyz Summoned card gains 500 ATK x the total Link Rating of Link Monsters linked to this card. You can detach 2 materials from this card, then target 1 Link-4 Cyberse Link Monster in your GY; Special Summon it to your field so it points to this card, also you cannot Special Summon other monsters or attack directly for the rest of this turn.
      score: 1.0
      effect_text2: 3 Level 4 monsters Once per turn, you can also Xyz Summon "Zoodiac Tigermortar" by using 1 "Zoodiac" monster you control with a different name as Xyz Material. (If you used an Xyz Monster, any Xyz Materials attached to it also become Xyz Materials on this card.) This card gains ATK and DEF equal to the ATK and DEF of all "Zoodiac" monsters attached to it as Materials. Once per turn: You can detach 1 Xyz Material from this card, then target 1 Xyz Monster you control and 1 "Zoodiac" monster in your GY; attach that "Zoodiac" monster to that Xyz Monster as Xyz Material.
    • effect_text: 1 Tuner + 1 or more non-Tuner Pendulum Monsters Once per turn: You can target 1 Pendulum Monster on the field or 1 card in the Pendulum Zone; destroy it, and if you do, shuffle 1 card on the field into the Deck. Once per turn: You can Special Summon 1 "Dracoslayer" monster from your Deck in Defense Position, but it cannot be used as a Synchro Material for a Summon.
      score: 0.5
      effect_text2: You can Ritual Summon this card with a "Recipe" card. If this card is Special Summoned: You can target 1 Spell/Trap on the field; destroy it. When a card or effect is activated that targets this card on the field, or when this card is targeted for an attack (Quick Effect): You can Tribute this card and 1 Attack Position monster on either field, and if you do, Special Summon 1 Level 3 or 4 "Nouvelles" Ritual Monster from your hand or Deck. You can only use each effect of "Confiras de Nouvelles" once per turn.
    • effect_text: If you control an Illusion or Spellcaster monster: Add 1 "White Forest" monster from your Deck to your hand. If this card is sent to the GY to activate a monster effect: You can Set this card. You can only use each effect of "Tales of the White Forest" once per turn.
      score: 0.0
      effect_text2: Once per turn, when your opponent activates a Trap Card, you can destroy the Trap Card and inflict 800 damage to your opponent.
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
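
As a hedged sketch, the score column of this split can be used to benchmark the model with the library's EmbeddingSimilarityEvaluator; the dataset path and split name below are placeholders, not the actual identifiers.

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_1231_1")
eval_ds = load_dataset("...", split="test")  # placeholder for the pair_similarity_new_1231 eval split

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_ds["effect_text"],
    sentences2=eval_ds["effect_text2"],
    scores=eval_ds["score"],
    main_similarity=SimilarityFunction.COSINE,
    name="pair_similarity_new_1231-eval",
)
results = evaluator(model)  # Pearson/Spearman correlations between cosine similarity and gold scores
print(results)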
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 15
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
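
Put together, a comparable run could be configured with the Sentence Transformers v3 trainer roughly as follows. This is a sketch, not the exact training script: the dataset path and split names are placeholders, and it assumes the columns are ordered (effect_text, effect_text2) with score picked up as the label.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("prajjwal1/bert-tiny")
dataset = load_dataset("...")  # placeholder for pair_similarity_new_1231
loss = CoSENTLoss(model, scale=20.0)  # similarity_fct defaults to pairwise_cos_sim

args = SentenceTransformerTrainingArguments(
    output_dir="tiny_bert_ft_sim_score",
    num_train_epochs=15,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],  # split names assumed
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()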

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 15
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss
0.1786 100 4.6161 4.4244
0.3571 200 4.5391 4.3765
0.5357 300 4.3857 4.3268
0.7143 400 4.4125 4.2893
0.8929 500 4.2914 4.2755
1.0714 600 4.3029 4.2674
1.25 700 4.3832 4.2538
1.4286 800 4.2629 4.2410
1.6071 900 4.2343 4.2204
1.7857 1000 4.3121 4.2025
1.9643 1100 4.1853 4.1866
2.1429 1200 4.257 4.1712
2.3214 1300 4.269 4.1560
2.5 1400 4.1065 4.1373
2.6786 1500 4.1499 4.1204
2.8571 1600 4.1191 4.1044
3.0357 1700 4.0988 4.0989
3.2143 1800 4.1788 4.0736
3.3929 1900 4.0597 4.0633
3.5714 2000 4.0105 4.0565
3.75 2100 4.1035 4.0299
3.9286 2200 3.963 4.0527
4.1071 2300 4.0127 4.0191
4.2857 2400 4.0932 3.9967
4.4643 2500 3.9348 3.9900
4.6429 2600 3.9643 3.9798
4.8214 2700 3.9502 3.9671
5.0 2800 3.8734 3.9682
5.1786 2900 3.9211 3.9837
5.3571 3000 3.9833 3.9463
5.5357 3100 3.805 3.9531
5.7143 3200 3.9045 3.9353
5.8929 3300 3.7978 3.9654
6.0714 3400 3.8802 3.9545
6.25 3500 3.9052 3.9242
6.4286 3600 3.8237 3.9042
6.6071 3700 3.7338 3.9315
6.7857 3800 3.855 3.9185
6.9643 3900 3.7611 3.9310
7.1429 4000 3.8459 3.9072
7.3214 4100 3.8968 3.8727
7.5 4200 3.6306 3.9094
7.6786 4300 3.7761 3.8921
7.8571 4400 3.728 3.8924
8.0357 4500 3.7182 3.8869
8.2143 4600 3.7695 3.9223
8.3929 4700 3.7255 3.8472
8.5714 4800 3.6354 3.8880
8.75 4900 3.7751 3.8574
8.9286 5000 3.646 3.8901
9.1071 5100 3.7268 3.8599
9.2857 5200 3.7616 3.8432
9.4643 5300 3.6173 3.8403
9.6429 5400 3.6365 3.8573
9.8214 5500 3.6667 3.8416
10.0 5600 3.6456 3.8467
10.1786 5700 3.6096 3.8817
10.3571 5800 3.7093 3.8397
10.5357 5900 3.4805 3.8649
10.7143 6000 3.6849 3.8437
10.8929 6100 3.57 3.8676
11.0714 6200 3.6915 3.8439
11.25 6300 3.6812 3.8451
11.4286 6400 3.5948 3.8374
11.6071 6500 3.5601 3.8342
11.7857 6600 3.6627 3.8348
11.9643 6700 3.5013 3.8493
12.1429 6800 3.6723 3.8404
12.3214 6900 3.6744 3.8312
12.5 7000 3.521 3.8233
12.6786 7100 3.5399 3.8336
12.8571 7200 3.5862 3.8304
13.0357 7300 3.5598 3.8357
13.2143 7400 3.6152 3.8446
13.3929 7500 3.6303 3.8178
13.5714 7600 3.4542 3.8314
13.75 7700 3.6197 3.8265
13.9286 7800 3.4931 3.8349
14.1071 7900 3.6109 3.8310
14.2857 8000 3.6087 3.8320
14.4643 8100 3.5136 3.8299
14.6429 8200 3.5176 3.8318
14.8214 8300 3.607 3.8286
15.0 8400 3.5206 3.8300

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}