|
--- |
|
tags: |
|
- sentence-transformers |
|
- sentence-similarity |
|
- feature-extraction |
|
- generated_from_trainer |
|
- dataset_size:156 |
|
- loss:MatryoshkaLoss |
|
- loss:MultipleNegativesRankingLoss |
|
base_model: Snowflake/snowflake-arctic-embed-l |
|
widget: |
|
- source_sentence: In what year does the author expect the prompt-driven custom interface |
|
feature to be widely integrated into products? |
|
sentences: |
|
- '17th: AI for Data Journalism: demonstrating what we can do with this stuff right |
|
now |
|
|
|
|
|
22nd: Options for accessing Llama 3 from the terminal using LLM |
|
|
|
|
|
|
|
|
|
May |
|
|
|
|
|
8th: Slop is the new name for unwanted AI-generated content |
|
|
|
|
|
15th: ChatGPT in “4o” mode is not running the new features yet |
|
|
|
|
|
29th: Training is not the same as chatting: ChatGPT and other LLMs don’t remember |
|
everything you say |
|
|
|
|
|
|
|
|
|
June |
|
|
|
|
|
6th: Accidental prompt injection against RAG applications |
|
|
|
|
|
10th: Thoughts on the WWDC 2024 keynote on Apple Intelligence |
|
|
|
|
|
17th: Language models on the command-line |
|
|
|
|
|
21st: Building search-based RAG using Claude, Datasette and Val Town |
|
|
|
|
|
27th: Open challenges for AI engineering |
|
|
|
|
|
|
|
|
|
July |
|
|
|
|
|
14th: Imitation Intelligence, my keynote for PyCon US 2024' |
|
- 'This prompt-driven custom interface feature is so powerful and easy to build |
|
(once you’ve figured out the gnarly details of browser sandboxing) that I expect |
|
it to show up as a feature in a wide range of products in 2025. |
|
|
|
Universal access to the best models lasted for just a few short months |
|
|
|
For a few short months this year all three of the best available models—GPT-4o, |
|
Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.' |
|
- 'Terminology aside, I remain skeptical as to their utility based, once again, |
|
on the challenge of gullibility. LLMs believe anything you tell them. Any systems |
|
that attempts to make meaningful decisions on your behalf will run into the same |
|
roadblock: how good is a travel agent, or a digital assistant, or even a research |
|
tool if it can’t distinguish truth from fiction? |
|
|
|
Just the other day Google Search was caught serving up an entirely fake description |
|
of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined |
|
movie listing from a fan fiction wiki.' |
|
- source_sentence: What notable development in LLM technology occurred in the final |
|
quarter of 2024? |
|
sentences: |
|
- 'The models may have got more capable, but most of the limitations remained the |
|
same. OpenAI’s o1 may finally be able to (mostly) count the Rs in strawberry, |
|
but its abilities are still limited by its nature as an LLM and the constraints |
|
placed on it by the harness it’s running in. o1 can’t run web searches or use |
|
Code Interpreter, but GPT-4o can—both in that same ChatGPT UI. (o1 will pretend |
|
to do those things if you ask it to, a regression to the URL hallucinations bug |
|
from early 2023). |
|
|
|
What are we doing about this? Not much. Most users are thrown in at the deep end. |
|
The default LLM chat UI is like taking brand new computer users, dropping them |
|
into a Linux terminal and expecting them to figure it all out.' |
|
- 'Now that those features are rolling out they’re pretty weak. As an LLM power-user |
|
I know what these models are capable of, and Apple’s LLM features offer a pale |
|
imitation of what a frontier LLM can do. Instead we’re getting notification summaries |
|
that misrepresent news headlines and writing assistant tools that I’ve not found |
|
useful at all. Genmoji are kind of fun though. |
|
|
|
The rise of inference-scaling “reasoning” models |
|
|
|
The most interesting development in the final quarter of 2024 was the introduction |
|
of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as |
|
o1-preview and o1-mini on September 12th.' |
|
- 'Against this photo of butterflies at the California Academy of Sciences: |
|
|
|
|
|
|
|
A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange |
|
slices of fruit are visible inside the dish. |
|
|
|
Two butterflies are positioned in the feeder, one is a dark brown/black butterfly |
|
with white/cream-colored markings. The other is a large, brown butterfly with |
|
patterns of lighter brown, beige, and black markings, including prominent eye |
|
spots. The larger brown butterfly appears to be feeding on the fruit.' |
|
- source_sentence: What is the license under which Alibaba's QwQ model was released? |
|
sentences: |
|
- The most recent twist, again from December (December was a lot) is live video. |
|
ChatGPT voice mode now provides the option to share your camera feed with the |
|
model and talk about what you can see in real time. Google Gemini have a preview |
|
of the same feature, which they managed to ship the day before ChatGPT did. |
|
- 'OpenAI are not the only game in town here. Google released their first entrant |
|
in the category, gemini-2.0-flash-thinking-exp, on December 19th. |
|
|
|
Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache |
|
2.0 license, and that one I could run on my own machine. They followed that up |
|
with a vision reasoning model called QvQ on December 24th, which I also ran locally. |
|
|
|
DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through |
|
their chat interface on November 20th. |
|
|
|
To understand more about inference scaling I recommend Is AI progress slowing |
|
down? by Arvind Narayanan and Sayash Kapoor.' |
|
- 'Stuff we figured out about AI in 2023 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Simon Willison’s Weblog |
|
|
|
Subscribe |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Stuff we figured out about AI in 2023 |
|
|
|
31st December 2023 |
|
|
|
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s |
|
OK to call these AI—they’re the latest and (currently) most interesting development |
|
in the academic field of Artificial Intelligence that dates back to the 1950s. |
|
|
|
Here’s my attempt to round up the highlights in one place!' |
|
- source_sentence: What is the significance of the cost reduction mentioned in the |
|
context regarding LLMs in 2024? |
|
sentences: |
|
- 'I think people who complain that LLM improvement has slowed are often missing |
|
the enormous advances in these multi-modal models. Being able to run prompts against |
|
images (and audio and video) is a fascinating new way to apply these models. |
|
|
|
Voice and live camera mode are science fiction come to life |
|
|
|
The audio and live video modes that have started to emerge deserve a special mention. |
|
|
|
The ability to talk to ChatGPT first arrived in September 2023, but it was mostly |
|
an illusion: OpenAI used their excellent Whisper speech-to-text model and a new |
|
text-to-speech model (creatively named tts-1) to enable conversations with the |
|
ChatGPT mobile apps, but the actual model just saw text.' |
|
- 'I like people who are skeptical of this stuff. The hype has been deafening for |
|
more than two years now, and there are enormous quantities of snake oil and misinformation |
|
out there. A lot of very bad decisions are being made based on that hype. Being |
|
critical is a virtue. |
|
|
|
If we want people with decision-making authority to make good decisions about |
|
how to apply these tools we first need to acknowledge that there ARE good applications, |
|
and then help explain how to put those into practice while avoiding the many unintiutive |
|
traps. |
|
|
|
(If you still don’t think there are any good applications at all I’m not sure |
|
why you made it to this point in the article!)' |
|
- '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less |
|
than a 400th of a cent). |
|
|
|
This increase in efficiency and reduction in price is my single favourite trend |
|
from 2024. I want the utility of LLMs at a fraction of the energy cost and it |
|
looks like that’s what we’re getting. |
|
|
|
Multimodal vision is common, audio and video are starting to emerge |
|
|
|
My butterfly example above illustrates another key trend from 2024: the rise of |
|
multi-modal LLMs. |
|
|
|
A year ago the single most notable example of these was GPT-4 Vision, released |
|
at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced |
|
on December 7th 2023 so it also (just) makes it into the 2023 window.' |
|
- source_sentence: How does the author feel about their choice of platform as a Mac |
|
user this year compared to last year? |
|
sentences: |
|
- 'I’m still trying to figure out the best patterns for doing this for my own work. |
|
Everyone knows that evals are important, but there remains a lack of great guidance |
|
for how to best implement them—I’m tracking this under my evals tag. My SVG pelican |
|
riding a bicycle benchmark is a pale imitation of what a real eval suite should |
|
look like. |
|
|
|
Apple Intelligence is bad, Apple’s MLX library is excellent |
|
|
|
As a Mac user I’ve been feeling a lot better about my choice of platform this |
|
year. |
|
|
|
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU |
|
was a huge disadvantage in terms of trying out new models.' |
|
- 'The GPT-4 barrier was comprehensively broken |
|
|
|
Some of those GPT-4 models run on my laptop |
|
|
|
LLM prices crashed, thanks to competition and increased efficiency |
|
|
|
Multimodal vision is common, audio and video are starting to emerge |
|
|
|
Voice and live camera mode are science fiction come to life |
|
|
|
Prompt driven app generation is a commodity already |
|
|
|
Universal access to the best models lasted for just a few short months |
|
|
|
“Agents” still haven’t really happened yet |
|
|
|
Evals really matter |
|
|
|
Apple Intelligence is bad, Apple’s MLX library is excellent |
|
|
|
The rise of inference-scaling “reasoning” models |
|
|
|
Was the best currently available LLM trained in China for less than $6m? |
|
|
|
The environmental impact got better |
|
|
|
The environmental impact got much, much worse' |
|
- Structured and Gradual Learning. In organic datasets, the relationship between |
|
tokens is often complex and indirect. Many reasoning steps may be required to |
|
connect the current token to the next, making it challenging for the model to |
|
learn effectively from next-token prediction. By contrast, each token generated |
|
by a language model is by definition predicted by the preceding tokens, making |
|
it easier for a model to follow the resulting reasoning patterns. |
|
pipeline_tag: sentence-similarity |
|
library_name: sentence-transformers |
|
metrics: |
|
- cosine_accuracy@1 |
|
- cosine_accuracy@3 |
|
- cosine_accuracy@5 |
|
- cosine_accuracy@10 |
|
- cosine_precision@1 |
|
- cosine_precision@3 |
|
- cosine_precision@5 |
|
- cosine_precision@10 |
|
- cosine_recall@1 |
|
- cosine_recall@3 |
|
- cosine_recall@5 |
|
- cosine_recall@10 |
|
- cosine_ndcg@10 |
|
- cosine_mrr@10 |
|
- cosine_map@100 |
|
model-index: |
|
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l |
|
results: |
|
- task: |
|
type: information-retrieval |
|
name: Information Retrieval |
|
dataset: |
|
name: Unknown |
|
type: unknown |
|
metrics: |
|
- type: cosine_accuracy@1 |
|
value: 0.7916666666666666 |
|
name: Cosine Accuracy@1 |
|
- type: cosine_accuracy@3 |
|
value: 1.0 |
|
name: Cosine Accuracy@3 |
|
- type: cosine_accuracy@5 |
|
value: 1.0 |
|
name: Cosine Accuracy@5 |
|
- type: cosine_accuracy@10 |
|
value: 1.0 |
|
name: Cosine Accuracy@10 |
|
- type: cosine_precision@1 |
|
value: 0.7916666666666666 |
|
name: Cosine Precision@1 |
|
- type: cosine_precision@3 |
|
value: 0.3333333333333333 |
|
name: Cosine Precision@3 |
|
- type: cosine_precision@5 |
|
value: 0.20000000000000004 |
|
name: Cosine Precision@5 |
|
- type: cosine_precision@10 |
|
value: 0.10000000000000002 |
|
name: Cosine Precision@10 |
|
- type: cosine_recall@1 |
|
value: 0.7916666666666666 |
|
name: Cosine Recall@1 |
|
- type: cosine_recall@3 |
|
value: 1.0 |
|
name: Cosine Recall@3 |
|
- type: cosine_recall@5 |
|
value: 1.0 |
|
name: Cosine Recall@5 |
|
- type: cosine_recall@10 |
|
value: 1.0 |
|
name: Cosine Recall@10 |
|
- type: cosine_ndcg@10 |
|
value: 0.923110365327387 |
|
name: Cosine Ndcg@10 |
|
- type: cosine_mrr@10 |
|
value: 0.8958333333333334 |
|
name: Cosine Mrr@10 |
|
- type: cosine_map@100 |
|
value: 0.8958333333333334 |
|
name: Cosine Map@100 |
|
--- |
|
|
|
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l |
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
- **Model Type:** Sentence Transformer |
|
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> |
|
- **Maximum Sequence Length:** 512 tokens |
|
- **Output Dimensionality:** 1024 dimensions |
|
- **Similarity Function:** Cosine Similarity |
|
<!-- - **Training Dataset:** Unknown --> |
|
<!-- - **Language:** Unknown --> |
|
<!-- - **License:** Unknown --> |
|
|
|
### Model Sources |
|
|
|
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
|
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
|
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) |
|
|
|
### Full Model Architecture |
|
|
|
``` |
|
SentenceTransformer( |
|
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel |
|
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
|
(2): Normalize() |
|
) |
|
``` |
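
The pooling configuration above uses the CLS token (`pooling_mode_cls_token: True`) followed by L2 normalization. As a rough, hedged sketch of what that corresponds to with plain 🤗 Transformers (not the exact internal code path):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the underlying BertModel and its tokenizer directly
tokenizer = AutoTokenizer.from_pretrained("thomfoolery/legal-ft-v0")
model = AutoModel.from_pretrained("thomfoolery/legal-ft-v0")

inputs = tokenizer(
    ["An example sentence"],
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling: keep the first token's hidden state, then L2-normalize,
# mirroring the Pooling and Normalize modules shown above
embeddings = outputs.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 1024])
```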
|
|
|
## Usage |
|
|
|
### Direct Usage (Sentence Transformers) |
|
|
|
First install the Sentence Transformers library: |
|
|
|
```bash |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can load this model and run inference. |
|
```python |
|
from sentence_transformers import SentenceTransformer |
|
|
|
# Download from the 🤗 Hub |
|
model = SentenceTransformer("thomfoolery/legal-ft-v0") |
|
# Run inference |
|
sentences = [ |
|
'How does the author feel about their choice of platform as a Mac user this year compared to last year?', |
|
'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.\nApple Intelligence is bad, Apple’s MLX library is excellent\nAs a Mac user I’ve been feeling a lot better about my choice of platform this year.\nLast year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.', |
|
'Structured and Gradual Learning. In organic datasets, the relationship between tokens is often complex and indirect. Many reasoning steps may be required to connect the current token to the next, making it challenging for the model to learn effectively from next-token prediction. By contrast, each token generated by a language model is by definition predicted by the preceding tokens, making it easier for a model to follow the resulting reasoning patterns.', |
|
] |
|
embeddings = model.encode(sentences) |
|
print(embeddings.shape) |
|
# (3, 1024)
|
|
|
# Get the similarity scores for the embeddings |
|
similarities = model.similarity(embeddings, embeddings) |
|
print(similarities.shape) |
|
# torch.Size([3, 3])
|
``` |
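
For retrieval-style usage, the same `similarity` helper can rank candidate passages against a question. The query and passages below are illustrative placeholders:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("thomfoolery/legal-ft-v0")

# Illustrative query and candidate passages
query_embedding = model.encode(["Under what license was Alibaba's QwQ model released?"])
passage_embeddings = model.encode([
    "Alibaba's Qwen team released their QwQ model on November 28th under an Apache 2.0 license.",
    "The ability to talk to ChatGPT first arrived in September 2023.",
])

# Higher cosine similarity means a better match
scores = model.similarity(query_embedding, passage_embeddings)
print(scores.shape)     # torch.Size([1, 2])
print(scores.argmax())  # index of the best-matching passage
```

Note that the Snowflake arctic-embed base models document a query prefix for retrieval queries; this card does not record whether the fine-tune was trained with one, so the sketch above encodes raw text.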
|
|
|
<!-- |
|
### Direct Usage (Transformers) |
|
|
|
<details><summary>Click to see the direct usage in Transformers</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Downstream Usage (Sentence Transformers) |
|
|
|
You can finetune this model on your own dataset. |
|
|
|
<details><summary>Click to expand</summary> |
|
|
|
</details> |
|
--> |
|
|
|
<!-- |
|
### Out-of-Scope Use |
|
|
|
*List how the model may foreseeably be misused and address what users ought not to do with the model.* |
|
--> |
|
|
|
## Evaluation |
|
|
|
### Metrics |
|
|
|
#### Information Retrieval |
|
|
|
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) |
|
|
|
| Metric | Value | |
|
|:--------------------|:-----------| |
|
| cosine_accuracy@1 | 0.7917 | |
|
| cosine_accuracy@3 | 1.0 | |
|
| cosine_accuracy@5 | 1.0 | |
|
| cosine_accuracy@10 | 1.0 | |
|
| cosine_precision@1 | 0.7917 | |
|
| cosine_precision@3 | 0.3333 | |
|
| cosine_precision@5 | 0.2 | |
|
| cosine_precision@10 | 0.1 | |
|
| cosine_recall@1 | 0.7917 | |
|
| cosine_recall@3 | 1.0 | |
|
| cosine_recall@5 | 1.0 | |
|
| cosine_recall@10 | 1.0 | |
|
| **cosine_ndcg@10** | **0.9231** | |
|
| cosine_mrr@10 | 0.8958 | |
|
| cosine_map@100 | 0.8958 | |
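
The sketch below shows one way to run the same evaluator on your own data. The queries, corpus, and relevance judgments are placeholders, not the evaluation set behind the numbers above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("thomfoolery/legal-ft-v0")

# Placeholder data: query id -> text, doc id -> text,
# and query id -> set of relevant doc ids
queries = {"q1": "What notable development in LLM technology occurred in the final quarter of 2024?"}
corpus = {
    "d1": "The most interesting development in the final quarter of 2024 was a new shape of LLM.",
    "d2": "2023 was the breakthrough year for Large Language Models (LLMs).",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="demo")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, ...
```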
|
|
|
<!-- |
|
## Bias, Risks and Limitations |
|
|
|
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
|
--> |
|
|
|
<!-- |
|
### Recommendations |
|
|
|
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
|
--> |
|
|
|
## Training Details |
|
|
|
### Training Dataset |
|
|
|
#### Unnamed Dataset |
|
|
|
* Size: 156 training samples |
|
* Columns: <code>sentence_0</code> and <code>sentence_1</code> |
|
* Approximate statistics based on the first 156 samples: |
|
| | sentence_0 | sentence_1 | |
|
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| |
|
| type | string | string | |
|
| details | <ul><li>min: 14 tokens</li><li>mean: 20.66 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.44 tokens</li><li>max: 204 tokens</li></ul> | |
|
* Samples: |
|
| sentence_0 | sentence_1 | |
|
|:----------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |
|
| <code>What key themes and pivotal moments in the field of Large Language Models were identified in 2024?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> | |
|
| <code>How does the review of 2024 compare to the previous year's review of 2023?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> | |
|
| <code>What advancements have been made in multimodal vision and audio/video capabilities in LLMs?</code> | <code>The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse</code> | |
|
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: |
|
```json |
|
{ |
|
"loss": "MultipleNegativesRankingLoss", |
|
"matryoshka_dims": [ |
|
768, |
|
512, |
|
256, |
|
128, |
|
64 |
|
], |
|
"matryoshka_weights": [ |
|
1, |
|
1, |
|
1, |
|
1, |
|
1 |
|
], |
|
"n_dims_per_step": -1 |
|
} |
|
``` |
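
In code, that configuration corresponds roughly to the following loss setup (a sketch, not the verbatim training script):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# MultipleNegativesRankingLoss treats the other in-batch pairs as negatives;
# MatryoshkaLoss applies it at each truncated embedding dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```

A practical consequence of Matryoshka training is that embeddings should remain usable when truncated to the listed sizes, e.g. by loading the model with `SentenceTransformer("thomfoolery/legal-ft-v0", truncate_dim=256)`.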
|
|
|
### Training Hyperparameters |
|
#### Non-Default Hyperparameters |
|
|
|
- `eval_strategy`: steps |
|
- `per_device_train_batch_size`: 10 |
|
- `per_device_eval_batch_size`: 10 |
|
- `num_train_epochs`: 10 |
|
- `multi_dataset_batch_sampler`: round_robin |
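
As a hedged illustration, these non-default values map onto the Sentence Transformers trainer as below. The dataset is a placeholder, and the loss shown is the inner ranking loss, which the actual run wrapped in `MatryoshkaLoss` as described above:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MultipleNegativesRankingLoss(model)

# Placeholder (question, passage) pairs in the sentence_0/sentence_1 format
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "In what year did LLM prices crash?",
        "What happened to multimodal models?",
    ],
    "sentence_1": [
        "LLM prices crashed, thanks to competition and increased efficiency.",
        "Multimodal vision is common, audio and video are starting to emerge.",
    ],
})

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a real held-out split
    loss=loss,
)
trainer.train()
```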
|
|
|
#### All Hyperparameters |
|
<details><summary>Click to expand</summary> |
|
|
|
- `overwrite_output_dir`: False |
|
- `do_predict`: False |
|
- `eval_strategy`: steps |
|
- `prediction_loss_only`: True |
|
- `per_device_train_batch_size`: 10 |
|
- `per_device_eval_batch_size`: 10 |
|
- `per_gpu_train_batch_size`: None |
|
- `per_gpu_eval_batch_size`: None |
|
- `gradient_accumulation_steps`: 1 |
|
- `eval_accumulation_steps`: None |
|
- `torch_empty_cache_steps`: None |
|
- `learning_rate`: 5e-05 |
|
- `weight_decay`: 0.0 |
|
- `adam_beta1`: 0.9 |
|
- `adam_beta2`: 0.999 |
|
- `adam_epsilon`: 1e-08 |
|
- `max_grad_norm`: 1 |
|
- `num_train_epochs`: 10 |
|
- `max_steps`: -1 |
|
- `lr_scheduler_type`: linear |
|
- `lr_scheduler_kwargs`: {} |
|
- `warmup_ratio`: 0.0 |
|
- `warmup_steps`: 0 |
|
- `log_level`: passive |
|
- `log_level_replica`: warning |
|
- `log_on_each_node`: True |
|
- `logging_nan_inf_filter`: True |
|
- `save_safetensors`: True |
|
- `save_on_each_node`: False |
|
- `save_only_model`: False |
|
- `restore_callback_states_from_checkpoint`: False |
|
- `no_cuda`: False |
|
- `use_cpu`: False |
|
- `use_mps_device`: False |
|
- `seed`: 42 |
|
- `data_seed`: None |
|
- `jit_mode_eval`: False |
|
- `use_ipex`: False |
|
- `bf16`: False |
|
- `fp16`: False |
|
- `fp16_opt_level`: O1 |
|
- `half_precision_backend`: auto |
|
- `bf16_full_eval`: False |
|
- `fp16_full_eval`: False |
|
- `tf32`: None |
|
- `local_rank`: 0 |
|
- `ddp_backend`: None |
|
- `tpu_num_cores`: None |
|
- `tpu_metrics_debug`: False |
|
- `debug`: [] |
|
- `dataloader_drop_last`: False |
|
- `dataloader_num_workers`: 0 |
|
- `dataloader_prefetch_factor`: None |
|
- `past_index`: -1 |
|
- `disable_tqdm`: False |
|
- `remove_unused_columns`: True |
|
- `label_names`: None |
|
- `load_best_model_at_end`: False |
|
- `ignore_data_skip`: False |
|
- `fsdp`: [] |
|
- `fsdp_min_num_params`: 0 |
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
- `deepspeed`: None |
|
- `label_smoothing_factor`: 0.0 |
|
- `optim`: adamw_torch |
|
- `optim_args`: None |
|
- `adafactor`: False |
|
- `group_by_length`: False |
|
- `length_column_name`: length |
|
- `ddp_find_unused_parameters`: None |
|
- `ddp_bucket_cap_mb`: None |
|
- `ddp_broadcast_buffers`: False |
|
- `dataloader_pin_memory`: True |
|
- `dataloader_persistent_workers`: False |
|
- `skip_memory_metrics`: True |
|
- `use_legacy_prediction_loop`: False |
|
- `push_to_hub`: False |
|
- `resume_from_checkpoint`: None |
|
- `hub_model_id`: None |
|
- `hub_strategy`: every_save |
|
- `hub_private_repo`: None |
|
- `hub_always_push`: False |
|
- `gradient_checkpointing`: False |
|
- `gradient_checkpointing_kwargs`: None |
|
- `include_inputs_for_metrics`: False |
|
- `include_for_metrics`: [] |
|
- `eval_do_concat_batches`: True |
|
- `fp16_backend`: auto |
|
- `push_to_hub_model_id`: None |
|
- `push_to_hub_organization`: None |
|
- `mp_parameters`: |
|
- `auto_find_batch_size`: False |
|
- `full_determinism`: False |
|
- `torchdynamo`: None |
|
- `ray_scope`: last |
|
- `ddp_timeout`: 1800 |
|
- `torch_compile`: False |
|
- `torch_compile_backend`: None |
|
- `torch_compile_mode`: None |
|
- `dispatch_batches`: None |
|
- `split_batches`: None |
|
- `include_tokens_per_second`: False |
|
- `include_num_input_tokens_seen`: False |
|
- `neftune_noise_alpha`: None |
|
- `optim_target_modules`: None |
|
- `batch_eval_metrics`: False |
|
- `eval_on_start`: False |
|
- `use_liger_kernel`: False |
|
- `eval_use_gather_object`: False |
|
- `average_tokens_across_devices`: False |
|
- `prompts`: None |
|
- `batch_sampler`: batch_sampler |
|
- `multi_dataset_batch_sampler`: round_robin |
|
|
|
</details> |
|
|
|
### Training Logs |
|
| Epoch | Step | cosine_ndcg@10 | |
|
|:-----:|:----:|:--------------:| |
|
| 1.0 | 16 | 0.8830 | |
|
| 2.0 | 32 | 0.9129 | |
|
| 3.0 | 48 | 0.8994 | |
|
| 3.125 | 50 | 0.8994 | |
|
| 4.0 | 64 | 0.9231 | |
|
| 5.0 | 80 | 0.9231 | |
|
| 6.0 | 96 | 0.9231 | |
|
| 6.25 | 100 | 0.9231 | |
|
| 7.0 | 112 | 0.9231 | |
|
| 8.0 | 128 | 0.9231 | |
|
| 9.0 | 144 | 0.9231 | |
|
| 9.375 | 150 | 0.9231 | |
|
| 10.0 | 160 | 0.9231 | |
|
|
|
|
|
### Framework Versions |
|
- Python: 3.11.11 |
|
- Sentence Transformers: 3.4.1 |
|
- Transformers: 4.48.2 |
|
- PyTorch: 2.5.1+cu124 |
|
- Accelerate: 1.3.0 |
|
- Datasets: 3.2.0 |
|
- Tokenizers: 0.21.0 |
|
|
|
## Citation |
|
|
|
### BibTeX |
|
|
|
#### Sentence Transformers |
|
```bibtex |
|
@inproceedings{reimers-2019-sentence-bert, |
|
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
|
author = "Reimers, Nils and Gurevych, Iryna", |
|
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
|
month = "11", |
|
year = "2019", |
|
publisher = "Association for Computational Linguistics", |
|
url = "https://arxiv.org/abs/1908.10084", |
|
} |
|
``` |
|
|
|
#### MatryoshkaLoss |
|
```bibtex |
|
@misc{kusupati2024matryoshka, |
|
title={Matryoshka Representation Learning}, |
|
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, |
|
year={2024}, |
|
eprint={2205.13147}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.LG} |
|
} |
|
``` |
|
|
|
#### MultipleNegativesRankingLoss |
|
```bibtex |
|
@misc{henderson2017efficient, |
|
title={Efficient Natural Language Response Suggestion for Smart Reply}, |
|
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, |
|
year={2017}, |
|
eprint={1705.00652}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|
|
<!-- |
|
## Glossary |
|
|
|
*Clearly define terms in order to be accessible across audiences.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Authors |
|
|
|
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
|
--> |
|
|
|
<!-- |
|
## Model Card Contact |
|
|
|
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
|
--> |