SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ashwinpatti/finetuned_arctic_kg_ft-legal-ft-v0")
# Run inference
sentences = [
    'How has the approach to run chases in the IPL changed from 2019 to 2024?',
    'Three slips and a gullySubscribeSign inShare this postThree slips and a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMoreWhat makes a successful run chase in the IPLA look at the way teams have been chasing targets in the IPL since 2019, and how 2024 was just a tad bit different in the way teams approach run chases.Divyansh PeswaniJan 09, 20254Share this postThree slips and a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMore1ShareT20 batting has two sides to it; the calculations of putting up a first-innings total that could be considered above par for the given conditions, and the complexities of structuring the second innings chase across the innings to bag a win safely',
    'batters by bowling line-length combinations they’re the most conservative against.Thanks for reading Three slips and a gully! This post is public so feel free to share it.ShareSuryakumar Yadav is an absolute beast in T20 cricket. Although in a lean patch right now, he is potentially the only cricketer that will go down as an all-time great because of his brilliance in only one format, the 20 over game. He, like most Indian batters, struggles a bit against SLA, but still fares better than most of his contemporaries. He’s conservative against the straight-on SLAOs, bowled at the stumps from a good length. As the bowler drifts his line away from the stumps, he finds himself to have more room, and his striking ability improves as the ball gets',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
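
The similarity matrix can also be used for simple retrieval. As a minimal sketch (assuming the first sentence is treated as the query and the remaining sentences as candidate passages), the best-matching passage can be picked like this:

# Treat row 0 as the query and rank the candidate passages in rows 1-2
query_scores = similarities[0, 1:]
best_idx = int(query_scores.argmax()) + 1  # +1 to skip the query itself
print(float(query_scores.max()), sentences[best_idx][:80])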

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.6786
cosine_accuracy@3 0.8571
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.6786
cosine_precision@3 0.2857
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.6786
cosine_recall@3 0.8571
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.8465
cosine_mrr@10 0.7958
cosine_map@100 0.7958

Information Retrieval

Metric Value
cosine_accuracy@1 0.4808
cosine_accuracy@3 0.75
cosine_accuracy@5 0.8462
cosine_accuracy@10 1.0
cosine_precision@1 0.4808
cosine_precision@3 0.25
cosine_precision@5 0.1692
cosine_precision@10 0.1
cosine_recall@1 0.4808
cosine_recall@3 0.75
cosine_recall@5 0.8462
cosine_recall@10 1.0
cosine_ndcg@10 0.7193
cosine_mrr@10 0.6311
cosine_map@100 0.6311
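
These tables report the kind of metrics produced by the sentence-transformers InformationRetrievalEvaluator. A minimal sketch of running such an evaluation yourself is shown below; the queries, corpus, and relevant_docs mappings are placeholders, not the actual held-out data behind the numbers above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ashwinpatti/finetuned_arctic_kg_ft-legal-ft-v0")

# Placeholder evaluation data: query id -> text, corpus id -> text, query id -> relevant corpus ids
queries = {"q1": "How has the approach to run chases in the IPL changed from 2019 to 2024?"}
corpus = {"d1": "A look at the way teams have been chasing targets in the IPL since 2019 ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
print(results)  # contains cosine_accuracy@k, cosine_precision@k, cosine_recall@k, cosine_ndcg@10, ...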

Training Details

Training Dataset

Unnamed Dataset

  • Size: 56 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 56 samples:
    • sentence_0: string; min: 10 tokens, mean: 18.35 tokens, max: 31 tokens
    • sentence_1: string; min: 12 tokens, mean: 159.24 tokens, max: 187 tokens
  • Samples:
    • sentence_0: What is important in cricket matchups?
      sentence_1: Three slips and a gullySubscribeSign inShare this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMoreThe lines and lengths are trying to tell us somethingTaking a closer at line-length combinations used against different batters to see if there's more than what meets the eyeDivyansh PeswaniFeb 02, 202510Share this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMore2ShareMatchups across all forms of cricket are predominant. They take different forms, and are incorporated within gameday strategy differently, but the thought process behind a bowling line-up is to bowl deliveries least suitable to a batter’s playing style.
    • sentence_0: Who is Divyansh Peswani?
      sentence_1: Three slips and a gullySubscribeSign inShare this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMoreThe lines and lengths are trying to tell us somethingTaking a closer at line-length combinations used against different batters to see if there's more than what meets the eyeDivyansh PeswaniFeb 02, 202510Share this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMore2ShareMatchups across all forms of cricket are predominant. They take different forms, and are incorporated within gameday strategy differently, but the thought process behind a bowling line-up is to bowl deliveries least suitable to a batter’s playing style.
    • sentence_0: Can you explain how OBs affect players like Virat Kohli in cricket?
      sentence_1: right-arm off-break all too well, etc. Data around batter-specific matchups is now readily available. For example, Rishabh Pant finds it hard to score against right-arm express quicks (averaging 19 striking at 130), Virat Kohli is extremely cautious batting against SLAs and OBs, striking at 110 and 111 against them respectively.Some batters may not dominate every bowling style, but they consistently perform decently and deliver sizeable returns against most types of bowlers. To understand how to effectively challenge these players, we can analyze specific combinations of line and length that bowlers use against them. By delving deeper into these patterns, we can identify the precise deliveries that are most effective in restricting their
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
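
In other words, MultipleNegativesRankingLoss is wrapped in MatryoshkaLoss so the in-batch contrastive objective is also applied to truncated embeddings. A minimal sketch of constructing the same loss (dataset loading and the trainer are omitted here):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)

Because of this objective, embeddings can in principle be truncated to the listed dimensions at inference time (for example by loading the model with truncate_dim=256) with a limited drop in quality.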
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
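
In sentence-transformers 3.x these values are passed through SentenceTransformerTrainingArguments. A minimal sketch of reproducing the non-default settings above, continuing from the loss sketch in the Training Dataset section (output_dir is a placeholder path, and train_dataset / eval_dataset stand in for the actual 56-pair data):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_arctic_kg_ft-legal-ft-v0",  # placeholder output path
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,                  # the SentenceTransformer initialized from Snowflake/snowflake-arctic-embed-l
    args=args,
    train_dataset=train_dataset,  # placeholder: the (sentence_0, sentence_1) pairs
    eval_dataset=eval_dataset,    # placeholder: held-out pairs for the step-wise evaluation
    loss=loss,                    # the MatryoshkaLoss defined earlier
)
trainer.train()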

Training Logs

Epoch Step cosine_ndcg@10
1.0 6 0.7848
2.0 12 0.8365
3.0 18 0.8539
4.0 24 0.8539
5.0 30 0.8680
6.0 36 0.8655
7.0 42 0.8727
8.0 48 0.8727
8.3333 50 0.8727
9.0 54 0.8727
10.0 60 0.8727
1.0 6 0.8738
2.0 12 0.8550
3.0 18 0.8550
4.0 24 0.8440
5.0 30 0.8465
6.0 36 0.8465
7.0 42 0.8465
8.0 48 0.8465
8.3333 50 0.8465
9.0 54 0.8465
10.0 60 0.8465
1.0 4 0.7031
2.0 8 0.7123
3.0 12 0.7160
4.0 16 0.7133
5.0 20 0.7157
6.0 24 0.7189
7.0 28 0.7193
8.0 32 0.7193
9.0 36 0.7193
10.0 40 0.7193

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}