|
--- |
|
tags: |
|
- sentence-transformers |
|
- sentence-similarity |
|
- feature-extraction |
|
- generated_from_trainer |
|
- dataset_size:400 |
|
- loss:MatryoshkaLoss |
|
- loss:MultipleNegativesRankingLoss |
|
base_model: Snowflake/snowflake-arctic-embed-l |
|
widget: |
|
- source_sentence: Why is the use of AI systems particularly important for individuals |
|
applying for or receiving public assistance benefits? |
|
sentences: |
|
- (48) |
|
- Another area in which the use of AI systems deserves special consideration is |
|
the access to and enjoyment of certain essential private and public services and |
|
benefits necessary for people to fully participate in society or to improve one’s |
|
standard of living. In particular, natural persons applying for or receiving essential |
|
public assistance benefits and services from public authorities namely healthcare |
|
services, social security benefits, social services providing protection in cases |
|
such as maternity, illness, industrial accidents, dependency or old age and loss |
|
of employment and social and housing assistance, are typically dependent on those |
|
benefits and services and in a vulnerable position in relation to the responsible |
|
authorities. |
|
- used for biometric verification, which includes authentication, the sole purpose |
|
of which is to confirm that a specific natural person is the person he or she |
|
claims to be and to confirm the identity of a natural person for the sole purpose |
|
of having access to a service, unlocking a device or having security access to |
|
premises. That exclusion is justified by the fact that such systems are likely |
|
to have a minor impact on fundamental rights of natural persons compared to the |
|
remote biometric identification systems which may be used for the processing of |
|
the biometric data of a large number of persons without their active involvement. |
|
In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison |
|
and the |
|
- source_sentence: How does the context ensure that existing Union law on personal |
|
data processing remains unaffected? |
|
sentences: |
|
- does not seek to affect the application of existing Union law governing the processing |
|
of personal data, including the tasks and powers of the independent supervisory |
|
authorities competent to monitor compliance with those instruments. It also does |
|
not affect the obligations of providers and deployers of AI systems in their role |
|
as data controllers or processors stemming from Union or national law on the protection |
|
of personal data in so far as the design, the development or the use of AI systems |
|
involves the processing of personal data. It is also appropriate to clarify that |
|
data subjects continue to enjoy all the rights and guarantees awarded to them |
|
by such Union law, including the rights related to solely automated individual |
|
- to operate without human intervention. The adaptiveness that an AI system could |
|
exhibit after deployment, refers to self-learning capabilities, allowing the system |
|
to change while in use. AI systems can be used on a stand-alone basis or as a component |
|
of a product, irrespective of whether the system is physically integrated into |
|
the product (embedded) or serves the functionality of the product without being |
|
integrated therein (non-embedded). |
|
- requested by the European Parliament (6). |
|
- source_sentence: How does the context surrounding the number 33 influence its interpretation? |
|
sentences: |
|
- race, sex or disabilities. In addition, the immediacy of the impact and the limited |
|
opportunities for further checks or corrections in relation to the use of such |
|
systems operating in real-time carry heightened risks for the rights and freedoms |
|
of the persons concerned in the context of, or impacted by, law enforcement activities. |
|
- (33) |
|
- (61) |
|
- source_sentence: What are the potential consequences of a serious disruption of |
|
critical infrastructure as defined in Directive (EU) 2022/2557? |
|
sentences: |
|
- to highly varying degrees for the practical pursuit of the localisation or identification |
|
of a perpetrator or suspect of the different criminal offences listed and having |
|
regard to the likely differences in the seriousness, probability and scale of |
|
the harm or possible negative consequences. An imminent threat to life or the |
|
physical safety of natural persons could also result from a serious disruption |
|
of critical infrastructure, as defined in Article 2, point (4) of Directive (EU) |
|
2022/2557 of the European Parliament and of the Council (19), where the disruption |
|
or destruction of such critical infrastructure would result in an imminent threat |
|
to life or the physical safety of a person, including through serious harm to |
|
the provision of |
|
- (53) |
|
- '(66) |
|
|
Requirements should apply to high-risk AI systems as regards risk management, |
|
the quality and relevance of data sets used, technical documentation and record-keeping, |
|
transparency and the provision of information to deployers, human oversight, and |
|
robustness, accuracy and cybersecurity. Those requirements are necessary to effectively |
|
mitigate the risks for health, safety and fundamental rights. As no other less |
|
trade restrictive measures are reasonably available those requirements are not |
|
unjustified restrictions to trade. |
|
|
(67)' |
|
- source_sentence: What criteria determine whether an AI system used in the administration |
|
of justice is classified as high-risk? |
|
sentences: |
|
- which one or more of the following conditions are fulfilled. The first such condition |
|
should be that the AI system is intended to perform a narrow procedural task, |
|
such as an AI system that transforms unstructured data into structured data, an |
|
AI system that classifies incoming documents into categories or an AI system that |
|
is used to detect duplicates among a large number of applications. Those tasks |
|
are of such narrow and limited nature that they pose only limited risks which |
|
are not increased through the use of an AI system in a context that is listed |
|
as a high-risk use in an annex to this Regulation. The second condition should |
|
be that the task performed by the AI system is intended to improve the result |
|
of a previously completed human |
|
- Certain AI systems intended for the administration of justice and democratic processes |
|
should be classified as high-risk, considering their potentially significant impact |
|
on democracy, the rule of law, individual freedoms as well as the right to an |
|
effective remedy and to a fair trial. In particular, to address the risks of potential |
|
biases, errors and opacity, it is appropriate to qualify as high-risk AI systems |
|
intended to be used by a judicial authority or on its behalf to assist judicial |
|
authorities in researching and interpreting facts and the law and in applying |
|
the law to a concrete set of facts. AI systems intended to be used by alternative |
|
dispute resolution bodies for those purposes should also be considered to be high-risk |
|
when |
|
- As regards AI systems that are safety components of products, or which are themselves |
|
products, falling within the scope of certain Union harmonisation legislation |
|
listed in an annex to this Regulation, it is appropriate to classify them as high-risk |
|
under this Regulation if the product concerned undergoes the conformity assessment |
|
procedure with a third-party conformity assessment body pursuant to that relevant |
|
Union harmonisation legislation. In particular, such products are machinery, toys, |
|
lifts, equipment and protective systems intended for use in potentially explosive |
|
atmospheres, radio equipment, pressure equipment, recreational craft equipment, |
|
cableway installations, appliances burning gaseous fuels, medical devices, in |
|
vitro |
|
pipeline_tag: sentence-similarity |
|
library_name: sentence-transformers |
|
metrics: |
|
- cosine_accuracy@1 |
|
- cosine_accuracy@3 |
|
- cosine_accuracy@5 |
|
- cosine_accuracy@10 |
|
- cosine_precision@1 |
|
- cosine_precision@3 |
|
- cosine_precision@5 |
|
- cosine_precision@10 |
|
- cosine_recall@1 |
|
- cosine_recall@3 |
|
- cosine_recall@5 |
|
- cosine_recall@10 |
|
- cosine_ndcg@10 |
|
- cosine_mrr@10 |
|
- cosine_map@100 |
|
model-index: |
|
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l |
|
results: |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: Unknown |
|
type: unknown |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.9583333333333334 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 1.0 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 1.0 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 1.0 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.9583333333333334 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.3333333333333333 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.19999999999999998 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.09999999999999999 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.9583333333333334 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 1.0 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 1.0 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 1.0 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.9791666666666666 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.9722222222222223 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@100 |
|
value: 0.9722222222222222 |
|
name: Cosine Map@100 |
|
--- |
|
|
|
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l |
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
- **Model Type:** Sentence Transformer |
|
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> |
|
- **Maximum Sequence Length:** 512 tokens |
|
- **Output Dimensionality:** 1024 dimensions |
|
- **Similarity Function:** Cosine Similarity |
|
<!-- - **Training Dataset:** Unknown --> |
|
<!-- - **Language:** Unknown --> |
|
<!-- - **License:** Unknown --> |
|
|
|
### Model Sources |
|
|
|
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
|
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
|
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) |
|
|
|
### Full Model Architecture |
|
|
|
``` |
|
SentenceTransformer( |
|
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel |
|
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
|
(2): Normalize() |
|
) |
|
``` |
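
Because the pooling module uses the CLS token and the final `Normalize()` module L2-normalizes the output, the embeddings are unit-length, so a plain dot product coincides with cosine similarity. A minimal sketch with made-up example sentences:

```python
import numpy as np

from sentence_transformers import SentenceTransformer

# Illustrative check: the Normalize() module makes every embedding unit-length,
# so a plain dot product equals the cosine similarity returned by model.similarity().
model = SentenceTransformer("llm-wizard/legal-ft-1")
emb = model.encode(["high-risk AI system", "biometric identification"])

print(np.linalg.norm(emb, axis=1))                         # both norms are ~1.0
print(float(emb[0] @ emb[1]))                              # dot product ...
print(float(model.similarity(emb[0:1], emb[1:2])[0, 0]))   # ... matches the cosine similarity
```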
|
|
|
## Usage |
|
|
|
### Direct Usage (Sentence Transformers) |
|
|
|
First install the Sentence Transformers library: |
|
|
|
```bash |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can load this model and run inference. |
|
```python |
|
from sentence_transformers import SentenceTransformer |
|
|
|
# Download from the 🤗 Hub |
|
model = SentenceTransformer("llm-wizard/legal-ft-1") |
|
# Run inference |
|
sentences = [ |
|
'What criteria determine whether an AI system used in the administration of justice is classified as high-risk?', |
|
'Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a\xa0fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a\xa0judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a\xa0concrete set of facts. AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be considered to be high-risk when', |
|
'which one or more of the following conditions are fulfilled. The first such condition should be that the AI system is intended to perform a\xa0narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a\xa0large number of applications. Those tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI system in a\xa0context that is listed as a\xa0high-risk use in an annex to this Regulation. The second condition should be that the task performed by the AI system is intended to improve the result of a\xa0previously completed human', |
|
] |
|
embeddings = model.encode(sentences) |
|
print(embeddings.shape) |
|
# (3, 1024)
|
|
|
# Get the similarity scores for the embeddings |
|
similarities = model.similarity(embeddings, embeddings) |
|
print(similarities.shape) |
|
# torch.Size([3, 3])
|
``` |
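
Since this model was finetuned with MatryoshkaLoss (see the training details below), its embeddings can also be truncated to a smaller dimensionality at load time. A hedged sketch, assuming the `truncate_dim` argument available in recent sentence-transformers releases and using 256, one of the trained Matryoshka dimensions:

```python
from sentence_transformers import SentenceTransformer

# Sketch: load the model with truncated (Matryoshka) embeddings.
# 256 is one of the dimensions listed in the MatryoshkaLoss parameters below.
model = SentenceTransformer("llm-wizard/legal-ft-1", truncate_dim=256)

embeddings = model.encode([
    "What criteria determine whether an AI system used in the administration of justice is classified as high-risk?",
])
print(embeddings.shape)
# (1, 256)
```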
|
|
|
<!-- |
|
### Direct Usage (Transformers) |
|
|
|
<details><summary>Click to see the direct usage in Transformers</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Downstream Usage (Sentence Transformers) |
|
|
|
You can finetune this model on your own dataset. |
|
|
|
<details><summary>Click to expand</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Out-of-Scope Use |
|
|
|
*List how the model may foreseeably be misused and address what users ought not to do with the model.* |
|
--> |
|
|
|
## Evaluation |
|
|
|
### Metrics |
|
|
|
#### Information Retrieval |
|
|
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:-----------| |
|
| cosine_accuracy@1 | 0.9583 | |
|
| cosine_accuracy@3 | 1.0 | |
|
| cosine_accuracy@5 | 1.0 | |
|
| cosine_accuracy@10 | 1.0 | |
|
| cosine_precision@1 | 0.9583 | |
|
| cosine_precision@3 | 0.3333 | |
|
| cosine_precision@5 | 0.2 | |
|
| cosine_precision@10 | 0.1 | |
|
| cosine_recall@1 | 0.9583 | |
|
| cosine_recall@3 | 1.0 | |
|
| cosine_recall@5 | 1.0 | |
|
| cosine_recall@10 | 1.0 | |
|
| **cosine_ndcg@10** | **0.9792** | |
|
| cosine_mrr@10 | 0.9722 | |
|
| cosine_map@100 | 0.9722 | |
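
For reference, a minimal sketch of how such metrics can be reproduced with the evaluator; the `queries`, `corpus` and `relevant_docs` dictionaries below are hypothetical stand-ins for the actual (unpublished) evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("llm-wizard/legal-ft-1")

# Hypothetical evaluation data; the real held-out split is not published here.
queries = {"q1": "What was requested by the European Parliament?"}
corpus = {
    "d1": "requested by the European Parliament (6).",
    "d2": "Certain AI systems intended for the administration of justice ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="legal-ir")
results = evaluator(model)
print(results)  # dict of cosine accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```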
|
|
|
<!-- |
|
## Bias, Risks and Limitations |
|
|
|
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
|
--> |
|
|
|
<!-- |
|
### Recommendations |
|
|
|
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
|
--> |
|
|
|
## Training Details |
|
|
|
### Training Dataset |
|
|
|
#### Unnamed Dataset |
|
|
|
* Size: 400 training samples |
|
* Columns: <code>sentence_0</code> and <code>sentence_1</code> |
|
* Approximate statistics based on the first 400 samples: |
|
| | sentence_0 | sentence_1 | |
|
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| |
|
| type | string | string | |
|
| details | <ul><li>min: 10 tokens</li><li>mean: 20.52 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 93.01 tokens</li><li>max: 186 tokens</li></ul> | |
|
* Samples: |
|
| sentence_0 | sentence_1 | |
|
|:--------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |
|
| <code>What are the intended uses of AI systems by tax and customs authorities according to the context?</code> | <code>natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and other relevant authorities should not become a factor of inequality, or exclusion. The impact of the use of AI tools on the defence rights of suspects should</code> | |
|
| <code>How should the use of AI tools by law enforcement authorities be managed to prevent inequality or exclusion?</code> | <code>natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and other relevant authorities should not become a factor of inequality, or exclusion. The impact of the use of AI tools on the defence rights of suspects should</code> | |
|
| <code>What was requested by the European Parliament?</code> | <code>requested by the European Parliament (6).</code> | |
|
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: |
|
```json |
|
{ |
|
"loss": "MultipleNegativesRankingLoss", |
|
"matryoshka_dims": [ |
|
768, |
|
512, |
|
256, |
|
128, |
|
64 |
|
], |
|
"matryoshka_weights": [ |
|
1, |
|
1, |
|
1, |
|
1, |
|
1 |
|
], |
|
"n_dims_per_step": -1 |
|
} |
|
``` |
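
A sketch of how this configuration is typically constructed in sentence-transformers, wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss` with the dimensions listed above; treat it as illustrative rather than the exact training script:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Apply the ranking loss at each truncated dimensionality.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
)
```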
|
|
|
### Training Hyperparameters |
|
#### Non-Default Hyperparameters |
|
|
|
- `eval_strategy`: steps |
|
- `per_device_train_batch_size`: 10 |
|
- `per_device_eval_batch_size`: 10 |
|
- `num_train_epochs`: 10 |
|
- `multi_dataset_batch_sampler`: round_robin |
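
A hedged sketch of a `SentenceTransformerTrainer` setup using the non-default values above; the tiny inline dataset and the output path are placeholders, not the actual 400-pair training set:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Placeholder pair dataset with the same columns as the training data above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What was requested by the European Parliament?"],
    "sentence_1": ["requested by the European Parliament (6)."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="models/legal-ft-1",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; a proper held-out split was used in practice
    loss=loss,
)
trainer.train()
```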
|
|
|
#### All Hyperparameters |
|
<details><summary>Click to expand</summary> |
|
|
|
- `overwrite_output_dir`: False |
|
- `do_predict`: False |
|
- `eval_strategy`: steps |
|
- `prediction_loss_only`: True |
|
- `per_device_train_batch_size`: 10 |
|
- `per_device_eval_batch_size`: 10 |
|
- `per_gpu_train_batch_size`: None |
|
- `per_gpu_eval_batch_size`: None |
|
- `gradient_accumulation_steps`: 1 |
|
- `eval_accumulation_steps`: None |
|
- `torch_empty_cache_steps`: None |
|
- `learning_rate`: 5e-05 |
|
- `weight_decay`: 0.0 |
|
- `adam_beta1`: 0.9 |
|
- `adam_beta2`: 0.999 |
|
- `adam_epsilon`: 1e-08 |
|
- `max_grad_norm`: 1 |
|
- `num_train_epochs`: 10 |
|
- `max_steps`: -1 |
|
- `lr_scheduler_type`: linear |
|
- `lr_scheduler_kwargs`: {} |
|
- `warmup_ratio`: 0.0 |
|
- `warmup_steps`: 0 |
|
- `log_level`: passive |
|
- `log_level_replica`: warning |
|
- `log_on_each_node`: True |
|
- `logging_nan_inf_filter`: True |
|
- `save_safetensors`: True |
|
- `save_on_each_node`: False |
|
- `save_only_model`: False |
|
- `restore_callback_states_from_checkpoint`: False |
|
- `no_cuda`: False |
|
- `use_cpu`: False |
|
- `use_mps_device`: False |
|
- `seed`: 42 |
|
- `data_seed`: None |
|
- `jit_mode_eval`: False |
|
- `use_ipex`: False |
|
- `bf16`: False |
|
- `fp16`: False |
|
- `fp16_opt_level`: O1 |
|
- `half_precision_backend`: auto |
|
- `bf16_full_eval`: False |
|
- `fp16_full_eval`: False |
|
- `tf32`: None |
|
- `local_rank`: 0 |
|
- `ddp_backend`: None |
|
- `tpu_num_cores`: None |
|
- `tpu_metrics_debug`: False |
|
- `debug`: [] |
|
- `dataloader_drop_last`: False |
|
- `dataloader_num_workers`: 0 |
|
- `dataloader_prefetch_factor`: None |
|
- `past_index`: -1 |
|
- `disable_tqdm`: False |
|
- `remove_unused_columns`: True |
|
- `label_names`: None |
|
- `load_best_model_at_end`: False |
|
- `ignore_data_skip`: False |
|
- `fsdp`: [] |
|
- `fsdp_min_num_params`: 0 |
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
- `deepspeed`: None |
|
- `label_smoothing_factor`: 0.0 |
|
- `optim`: adamw_torch |
|
- `optim_args`: None |
|
- `adafactor`: False |
|
- `group_by_length`: False |
|
- `length_column_name`: length |
|
- `ddp_find_unused_parameters`: None |
|
- `ddp_bucket_cap_mb`: None |
|
- `ddp_broadcast_buffers`: False |
|
- `dataloader_pin_memory`: True |
|
- `dataloader_persistent_workers`: False |
|
- `skip_memory_metrics`: True |
|
- `use_legacy_prediction_loop`: False |
|
- `push_to_hub`: False |
|
- `resume_from_checkpoint`: None |
|
- `hub_model_id`: None |
|
- `hub_strategy`: every_save |
|
- `hub_private_repo`: None |
|
- `hub_always_push`: False |
|
- `gradient_checkpointing`: False |
|
- `gradient_checkpointing_kwargs`: None |
|
- `include_inputs_for_metrics`: False |
|
- `include_for_metrics`: [] |
|
- `eval_do_concat_batches`: True |
|
- `fp16_backend`: auto |
|
- `push_to_hub_model_id`: None |
|
- `push_to_hub_organization`: None |
|
- `mp_parameters`: |
|
- `auto_find_batch_size`: False |
|
- `full_determinism`: False |
|
- `torchdynamo`: None |
|
- `ray_scope`: last |
|
- `ddp_timeout`: 1800 |
|
- `torch_compile`: False |
|
- `torch_compile_backend`: None |
|
- `torch_compile_mode`: None |
|
- `dispatch_batches`: None |
|
- `split_batches`: None |
|
- `include_tokens_per_second`: False |
|
- `include_num_input_tokens_seen`: False |
|
- `neftune_noise_alpha`: None |
|
- `optim_target_modules`: None |
|
- `batch_eval_metrics`: False |
|
- `eval_on_start`: False |
|
- `use_liger_kernel`: False |
|
- `eval_use_gather_object`: False |
|
- `average_tokens_across_devices`: False |
|
- `prompts`: None |
|
- `batch_sampler`: batch_sampler |
|
- `multi_dataset_batch_sampler`: round_robin |
|
|
|
</details> |
|
|
|
### Training Logs |
|
| Epoch | Step | cosine_ndcg@10 | |
|
|:-----:|:----:|:--------------:| |
|
| 1.0 | 40 | 0.9715 | |
|
| 1.25 | 50 | 0.9792 | |
|
| 2.0 | 80 | 0.9792 | |
|
| 2.5 | 100 | 0.9715 | |
|
| 3.0 | 120 | 0.9638 | |
|
| 3.75 | 150 | 0.9715 | |
|
| 4.0 | 160 | 0.9792 | |
|
| 5.0 | 200 | 0.9623 | |
|
| 6.0 | 240 | 0.9777 | |
|
| 6.25 | 250 | 0.9777 | |
|
| 7.0 | 280 | 0.9792 | |
|
| 7.5 | 300 | 0.9715 | |
|
| 8.0 | 320 | 0.9715 | |
|
| 8.75 | 350 | 0.9792 | |
|
| 9.0 | 360 | 0.9792 | |
|
| 10.0 | 400 | 0.9792 | |
|
|
|
|
|
### Framework Versions |
|
- Python: 3.11.11 |
|
- Sentence Transformers: 3.4.1 |
|
- Transformers: 4.48.2 |
|
- PyTorch: 2.5.1+cu124 |
|
- Accelerate: 1.3.0 |
|
- Datasets: 3.2.0 |
|
- Tokenizers: 0.21.0 |
|
|
|
## Citation |
|
|
|
### BibTeX |
|
|
|
#### Sentence Transformers |
|
```bibtex |
|
@inproceedings{reimers-2019-sentence-bert, |
|
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
|
author = "Reimers, Nils and Gurevych, Iryna", |
|
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
|
month = "11", |
|
year = "2019", |
|
publisher = "Association for Computational Linguistics", |
|
url = "https://arxiv.org/abs/1908.10084", |
|
} |
|
``` |
|
|
|
#### MatryoshkaLoss |
|
```bibtex |
|
@misc{kusupati2024matryoshka, |
|
title={Matryoshka Representation Learning}, |
|
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, |
|
year={2024}, |
|
eprint={2205.13147}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.LG} |
|
} |
|
``` |
|
|
|
#### MultipleNegativesRankingLoss |
|
```bibtex |
|
@misc{henderson2017efficient, |
|
title={Efficient Natural Language Response Suggestion for Smart Reply}, |
|
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, |
|
year={2017}, |
|
eprint={1705.00652}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|
|
<!-- |
|
## Glossary |
|
|
|
*Clearly define terms in order to be accessible across audiences.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Authors |
|
|
|
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Contact |
|
|
|
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
|
--> |