---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: >-
Favorable resolution of tax positions would be recognized as a reduction
to the effective income tax rate in the period of resolution.
sentences:
- What was the operational trend for voice connections from 2021 to 2023?
- >-
How is a favorable resolution of a tax position recognized in financial
terms?
- >-
What was the amount of cash generated from operations by the company in
fiscal year 2023?
- source_sentence: >-
The cumulative basis adjustments associated with these hedging
relationships are a reduction of the amortized cost basis of the closed
portfolios of $19 million.
sentences:
- >-
What was the reduction in the amortized cost basis of the closed
portfolios due to cumulative basis adjustments in these hedging
relationships?
- >-
How is the inclusion of the financial statements in IBM's Form 10-K
described?
- >-
What was the percentage increase in Electronic Arts' diluted earnings
per share in the fiscal year ended March 31, 2023?
- source_sentence: >-
Walmart's fintech venture, ONE, provides financial services such as money
orders, prepaid access, money transfers, check cashing, bill payment, and
certain types of installment lending.
sentences:
- >-
What types of financial services are offered through Walmart's fintech
venture, ONE?
- How much cash did FedEx have at the end of May 2023?
- What is the purpose of Visa according to the overview provided?
- source_sentence: >-
Medicare Star Ratings - A portion of each Medicare Advantage plan’s
reimbursement is tied to the plan’s “star ratings.” The star rating system
considers a variety of measures adopted by CMS, including quality of
preventative services, chronic illness management, compliance and overall
customer satisfaction. Only Medicare Advantage plans with an overall star
rating of 4 or more stars (out of 5 stars) are eligible for a quality
bonus in their basic premium rates.
sentences:
- >-
What authority does the Macao government have over VML's
recapitalization plans?
- >-
How do Medicare Advantage star ratings impact the financial
reimbursements of plans?
- >-
How are the adjusted income from operations and gross profit figures
calculated for the company?
- source_sentence: >-
For the year ended December 31, 2022, the free cash flow reported was
-$11,569 million.
sentences:
- What new initiative did Dollar Tree announce in September 2021?
- >-
What was the amount of deferred net loss on derivatives included in
accumulated other comprehensive income as of December 31, 2023?
- >-
What was the free cash flow reported for the year ended December 31,
2022?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7028571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8242857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8585714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8942857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7028571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2747619047619047
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1717142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08942857142857143
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7028571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8242857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8585714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8942857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8013128307721423
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7712681405895693
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7753497571186561
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7071428571428572
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8228571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8571428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8928571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7071428571428572
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2742857142857143
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1714285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08928571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7071428571428572
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8228571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8571428571428571
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8928571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8020949349247011
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7728367346938774
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7769793740668877
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7057142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8185714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.85
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8914285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7057142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27285714285714285
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08914285714285713
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7057142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8185714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.85
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8914285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7985100284142371
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.768848072562358
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7728585204433037
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6842857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8085714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8442857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8842857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6842857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26952380952380955
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16885714285714284
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08842857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6842857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8085714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8442857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8842857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7851722154382534
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.753300453514739
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7577938812506425
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6685714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.78
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8114285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8628571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6685714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16228571428571426
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08628571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6685714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.78
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8114285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8628571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7647477473058039
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7335680272108841
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7387414091286255
name: Cosine Map@100
---

BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
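Because the final Normalize module makes every output embedding unit-length, cosine similarity and dot-product similarity coincide for this model. A minimal check, assuming nothing beyond the model name used in this card:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("kenoc/bge-base-financial-matryoshka")
emb = model.encode(["free cash flow", "operating cash flow"])

# The Normalize module guarantees unit-length vectors ...
print(np.linalg.norm(emb, axis=1))  # ~[1. 1.]
# ... so a plain dot product already is the cosine similarity.
print(emb[0] @ emb[1])
```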
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("kenoc/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'For the year ended December 31, 2022, the free cash flow reported was -$11,569 million.',
    'What was the free cash flow reported for the year ended December 31, 2022?',
    'What was the amount of deferred net loss on derivatives included in accumulated other comprehensive income as of December 31, 2023?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
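Since this is a Matryoshka model, embeddings can also be truncated to the smaller dimensionalities evaluated below (512, 256, 128, or 64) with only a modest quality drop. The truncate_dim argument is part of the Sentence Transformers API:

```python
from sentence_transformers import SentenceTransformer

# Load the model so that only the first 256 embedding dimensions are kept.
model = SentenceTransformer("kenoc/bge-base-financial-matryoshka", truncate_dim=256)

embeddings = model.encode(["What was the free cash flow in 2022?"])
print(embeddings.shape)
# [1, 256]
```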
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator (a usage sketch follows the table below)
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.7029 | 0.7071 | 0.7057 | 0.6843 | 0.6686 |
cosine_accuracy@3 | 0.8243 | 0.8229 | 0.8186 | 0.8086 | 0.78 |
cosine_accuracy@5 | 0.8586 | 0.8571 | 0.85 | 0.8443 | 0.8114 |
cosine_accuracy@10 | 0.8943 | 0.8929 | 0.8914 | 0.8843 | 0.8629 |
cosine_precision@1 | 0.7029 | 0.7071 | 0.7057 | 0.6843 | 0.6686 |
cosine_precision@3 | 0.2748 | 0.2743 | 0.2729 | 0.2695 | 0.26 |
cosine_precision@5 | 0.1717 | 0.1714 | 0.17 | 0.1689 | 0.1623 |
cosine_precision@10 | 0.0894 | 0.0893 | 0.0891 | 0.0884 | 0.0863 |
cosine_recall@1 | 0.7029 | 0.7071 | 0.7057 | 0.6843 | 0.6686 |
cosine_recall@3 | 0.8243 | 0.8229 | 0.8186 | 0.8086 | 0.78 |
cosine_recall@5 | 0.8586 | 0.8571 | 0.85 | 0.8443 | 0.8114 |
cosine_recall@10 | 0.8943 | 0.8929 | 0.8914 | 0.8843 | 0.8629 |
cosine_ndcg@10 | 0.8013 | 0.8021 | 0.7985 | 0.7852 | 0.7647 |
cosine_mrr@10 | 0.7713 | 0.7728 | 0.7688 | 0.7533 | 0.7336 |
cosine_map@100 | 0.7753 | 0.777 | 0.7729 | 0.7578 | 0.7387 |
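For reference, a minimal sketch of how one of these evaluations can be set up with InformationRetrievalEvaluator; the queries, corpus, and relevant_docs mappings below are hypothetical placeholders, not the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("kenoc/bge-base-financial-matryoshka")

# Hypothetical placeholder data: query id -> text, passage id -> text,
# and query id -> set of relevant passage ids.
queries = {"q1": "What was the free cash flow reported for the year ended December 31, 2022?"}
corpus = {"d1": "For the year ended December 31, 2022, the free cash flow reported was -$11,569 million."}
relevant_docs = {"q1": {"d1"}}

# truncate_dim selects which Matryoshka dimensionality is evaluated.
evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs, name="dim_256", truncate_dim=256
)
results = evaluator(model)
print(results["dim_256_cosine_ndcg@10"])
```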
Training Details
Training Dataset
json
- Dataset: json
- Size: 6,300 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:

|  | positive | anchor |
|---|---|---|
| type | string | string |
| details | min: 2 tokens, mean: 46.63 tokens, max: 439 tokens | min: 2 tokens, mean: 20.63 tokens, max: 45 tokens |
- Samples:

| positive | anchor |
|---|---|
| During fiscal year 2023, 276 billion payments and cash transactions with Visa’s brand were processed by Visa or other networks. | What significant milestone of transactions did Visa reach during fiscal year 2023? |
| The AMPTC for microinverters decreases by 25% each year beginning in 2030 and ending after 2032. | What is the trajectory of the AMPTC for microinverters starting in 2030? |
| Revenue increased in 2023 driven by increased volume and higher realized prices. The increase in revenue in 2023 was primarily driven by sales of Mounjaro®, Verzenio®, Jardiance®, as well as the sales of the rights for the olanzapine portfolio, including Zyprexa®, and for Baqsimi®, partially offset by the absence of revenue from COVID-19 antibodies and lower sales of Alimta® following the entry of multiple generics in the first half of 2022. | What factors contributed to the increase in revenue in 2023? |
- Loss: MatryoshkaLoss with these parameters (sketched in code after this list):

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
```
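Taken together, a minimal sketch of how a dataset and loss of this shape are constructed in code; train.json is a hypothetical filename (the card only identifies the data as a local JSON dataset), while the model name and loss parameters are taken from this card:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Hypothetical file name; the card only says the data is a local JSON dataset
# with 6,300 positive/anchor pairs.
train_dataset = load_dataset("json", data_files="train.json", split="train")
print(train_dataset.column_names)  # expected: ['positive', 'anchor']

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Rank each anchor's positive above the other in-batch positives, and apply
# the same objective at every truncated embedding size so that short
# prefixes of the embedding remain useful on their own.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```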
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 8
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
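A sketch of how these non-default values map onto SentenceTransformerTrainingArguments; output_dir is a placeholder, and save_strategy="epoch" is an assumption (it must match eval_strategy for load_best_model_at_end to work):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder output path
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must pair with eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    # no_duplicates keeps duplicate texts out of a batch, which would
    # otherwise create false negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```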
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 8
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|
0.2030 | 10 | 1.1555 | - | - | - | - | - |
0.4061 | 20 | 0.7503 | - | - | - | - | - |
0.6091 | 30 | 0.4782 | - | - | - | - | - |
0.8122 | 40 | 0.3436 | - | - | - | - | - |
1.0 | 50 | 0.361 | 0.7942 | 0.7943 | 0.7905 | 0.7770 | 0.7428 |
1.2030 | 60 | 0.3078 | - | - | - | - | - |
1.4061 | 70 | 0.2375 | - | - | - | - | - |
1.6091 | 80 | 0.1683 | - | - | - | - | - |
1.8122 | 90 | 0.1412 | - | - | - | - | - |
2.0 | 100 | 0.1431 | 0.7994 | 0.8003 | 0.7980 | 0.7828 | 0.7577 |
2.2030 | 110 | 0.1308 | - | - | - | - | - |
2.4061 | 120 | 0.1188 | - | - | - | - | - |
2.6091 | 130 | 0.0952 | - | - | - | - | - |
2.8122 | 140 | 0.0806 | - | - | - | - | - |
3.0 | 150 | 0.0832 | 0.8019 | 0.8009 | 0.7983 | 0.7844 | 0.7660 |
3.2030 | 160 | 0.1044 | - | - | - | - | - |
3.4061 | 170 | 0.0984 | - | - | - | - | - |
3.6091 | 180 | 0.0838 | - | - | - | - | - |
3.8122 | 190 | 0.0768 | - | - | - | - | - |
**3.934** | **196** | - | **0.8013** | **0.8021** | **0.7985** | **0.7852** | **0.7647** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 2.19.2
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```