Metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
In what year does the author expect the prompt-driven custom interface
feature to be widely integrated into products?
sentences:
- >-
17th: AI for Data Journalism: demonstrating what we can do with this
stuff right now
22nd: Options for accessing Llama 3 from the terminal using LLM
May
8th: Slop is the new name for unwanted AI-generated content
15th: ChatGPT in “4o” mode is not running the new features yet
29th: Training is not the same as chatting: ChatGPT and other LLMs don’t
remember everything you say
June
6th: Accidental prompt injection against RAG applications
10th: Thoughts on the WWDC 2024 keynote on Apple Intelligence
17th: Language models on the command-line
21st: Building search-based RAG using Claude, Datasette and Val Town
27th: Open challenges for AI engineering
July
14th: Imitation Intelligence, my keynote for PyCon US 2024
- >-
This prompt-driven custom interface feature is so powerful and easy to
build (once you’ve figured out the gnarly details of browser sandboxing)
that I expect it to show up as a feature in a wide range of products in
2025.
Universal access to the best models lasted for just a few short months
For a few short months this year all three of the best available
models—GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely
available to most of the world.
- >-
Terminology aside, I remain skeptical as to their utility based, once
again, on the challenge of gullibility. LLMs believe anything you tell
them. Any system that attempts to make meaningful decisions on your
behalf will run into the same roadblock: how good is a travel agent, or
a digital assistant, or even a research tool if it can’t distinguish
truth from fiction?
Just the other day Google Search was caught serving up an entirely fake
description of the non-existent movie “Encanto 2”. It turned out to be
summarizing an imagined movie listing from a fan fiction wiki.
- source_sentence: >-
What notable development in LLM technology occurred in the final quarter
of 2024?
sentences:
- >-
The models may have got more capable, but most of the limitations
remained the same. OpenAI’s o1 may finally be able to (mostly) count the
Rs in strawberry, but its abilities are still limited by its nature as
an LLM and the constraints placed on it by the harness it’s running in.
o1 can’t run web searches or use Code Interpreter, but GPT-4o can—both
in that same ChatGPT UI. (o1 will pretend to do those things if you ask
it to, a regression to the URL hallucinations bug from early 2023).
What are we doing about this? Not much. Most users are thrown in at the
deep end. The default LLM chat UI is like taking brand new computer
users, dropping them into a Linux terminal and expecting them to figure
it all out.
- >-
Now that those features are rolling out they’re pretty weak. As an LLM
power-user I know what these models are capable of, and Apple’s LLM
features offer a pale imitation of what a frontier LLM can do. Instead
we’re getting notification summaries that misrepresent news headlines
and writing assistant tools that I’ve not found useful at all. Genmoji
are kind of fun though.
The rise of inference-scaling “reasoning” models
The most interesting development in the final quarter of 2024 was the
introduction of a new shape of LLM, exemplified by OpenAI’s o1
models—initially released as o1-preview and o1-mini on September 12th.
- >-
Against this photo of butterflies at the California Academy of Sciences:
A shallow dish, likely a hummingbird or butterfly feeder, is red.
Pieces of orange slices of fruit are visible inside the dish.
Two butterflies are positioned in the feeder, one is a dark brown/black
butterfly with white/cream-colored markings. The other is a large,
brown butterfly with patterns of lighter brown, beige, and black
markings, including prominent eye spots. The larger brown butterfly
appears to be feeding on the fruit.
- source_sentence: What is the license under which Alibaba's QwQ model was released?
sentences:
- >-
The most recent twist, again from December (December was a lot) is live
video. ChatGPT voice mode now provides the option to share your camera
feed with the model and talk about what you can see in real time. Google
Gemini have a preview of the same feature, which they managed to ship
the day before ChatGPT did.
- >-
OpenAI are not the only game in town here. Google released their first
entrant in the category, gemini-2.0-flash-thinking-exp, on December
19th.
Alibaba’s Qwen team released their QwQ model on November 28th—under an
Apache 2.0 license, and that one I could run on my own machine. They
followed that up with a vision reasoning model called QvQ on December
24th, which I also ran locally.
DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out
through their chat interface on November 20th.
To understand more about inference scaling I recommend Is AI progress
slowing down? by Arvind Narayanan and Sayash Kapoor.
- >-
Stuff we figured out about AI in 2023
Simon Willison’s Weblog
Subscribe
Stuff we figured out about AI in 2023
31st December 2023
2023 was the breakthrough year for Large Language Models (LLMs). I think
it’s OK to call these AI—they’re the latest and (currently) most
interesting development in the academic field of Artificial Intelligence
that dates back to the 1950s.
Here’s my attempt to round up the highlights in one place!
- source_sentence: >-
What is the significance of the cost reduction mentioned in the context
regarding LLMs in 2024?
sentences:
- >-
I think people who complain that LLM improvement has slowed are often
missing the enormous advances in these multi-modal models. Being able to
run prompts against images (and audio and video) is a fascinating new
way to apply these models.
Voice and live camera mode are science fiction come to life
The audio and live video modes that have started to emerge deserve a
special mention.
The ability to talk to ChatGPT first arrived in September 2023, but it
was mostly an illusion: OpenAI used their excellent Whisper
speech-to-text model and a new text-to-speech model (creatively named
tts-1) to enable conversations with the ChatGPT mobile apps, but the
actual model just saw text.
- >-
I like people who are skeptical of this stuff. The hype has been
deafening for more than two years now, and there are enormous quantities
of snake oil and misinformation out there. A lot of very bad decisions
are being made based on that hype. Being critical is a virtue.
If we want people with decision-making authority to make good decisions
about how to apply these tools we first need to acknowledge that there
ARE good applications, and then help explain how to put those into
practice while avoiding the many unintuitive traps.
(If you still don’t think there are any good applications at all I’m not
sure why you made it to this point in the article!)
- >-
260 input tokens, 92 output tokens. Cost approximately 0.0024 cents
(that’s less than a 400th of a cent).
This increase in efficiency and reduction in price is my single
favourite trend from 2024. I want the utility of LLMs at a fraction of
the energy cost and it looks like that’s what we’re getting.
Multimodal vision is common, audio and video are starting to emerge
My butterfly example above illustrates another key trend from 2024: the
rise of multi-modal LLMs.
A year ago the single most notable example of these was GPT-4 Vision,
released at OpenAI’s DevDay in November 2023. Google’s multi-modal
Gemini 1.0 was announced on December 7th 2023 so it also (just) makes it
into the 2023 window.
- source_sentence: >-
How does the author feel about their choice of platform as a Mac user this
year compared to last year?
sentences:
- >-
I’m still trying to figure out the best patterns for doing this for my
own work. Everyone knows that evals are important, but there remains a
lack of great guidance for how to best implement them—I’m tracking this
under my evals tag. My SVG pelican riding a bicycle benchmark is a pale
imitation of what a real eval suite should look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform
this year.
Last year it felt like my lack of a Linux/Windows machine with an
NVIDIA GPU was a huge disadvantage in terms of trying out new models.
- |-
The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
“Agents” still haven’t really happened yet
Evals really matter
Apple Intelligence is bad, Apple’s MLX library is excellent
The rise of inference-scaling “reasoning” models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse
- >-
Structured and Gradual Learning. In organic datasets, the relationship
between tokens is often complex and indirect. Many reasoning steps may
be required to connect the current token to the next, making it
challenging for the model to learn effectively from next-token
prediction. By contrast, each token generated by a language model is by
definition predicted by the preceding tokens, making it easier for a
model to follow the resulting reasoning patterns.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.7916666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7916666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7916666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.923110365327387
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8958333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8958333333333334
name: Cosine Map@100
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("thomfoolery/legal-ft-v0")
# Run inference
sentences = [
'How does the author feel about their choice of platform as a Mac user this year compared to last year?',
'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.\nApple Intelligence is bad, Apple’s MLX library is excellent\nAs a Mac user I’ve been feeling a lot better about my choice of platform this year.\nLast year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.',
'Structured and Gradual Learning. In organic datasets, the relationship between tokens is often complex and indirect. Many reasoning steps may be required to connect the current token to the next, making it challenging for the model to learn effectively from next-token prediction. By contrast, each token generated by a language model is by definition predicted by the preceding tokens, making it easier for a model to follow the resulting reasoning patterns.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.7917 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.7917 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.7917 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9231 |
cosine_mrr@10 | 0.8958 |
cosine_map@100 | 0.8958 |
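The table above can be regenerated with the same evaluator class. A sketch, assuming a small hand-built split (the actual queries and corpus behind these numbers are not published with this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("thomfoolery/legal-ft-v0")

# Hypothetical split: query ids -> text, corpus ids -> text,
# and each query's set of relevant corpus ids
queries = {"q1": "What is the license under which Alibaba's QwQ model was released?"}
corpus = {
    "d1": "Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache 2.0 license.",
    "d2": "Genmoji are kind of fun though.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="example",
)
results = evaluator(model)
print(results["example_cosine_ndcg@10"])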
Training Details
Training Dataset
Unnamed Dataset
- Size: 156 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 156 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 14 tokens, mean: 20.66 tokens, max: 35 tokens | min: 43 tokens, mean: 130.44 tokens, max: 204 tokens |
- Samples (first three pairs):

  sentence_0: What key themes and pivotal moments in the field of Large Language Models were identified in 2024?
  sentence_1:
    Things we learned about LLMs in 2024
    Simon Willison’s Weblog
    Subscribe
    Things we learned about LLMs in 2024
    31st December 2024
    A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.
    This is a sequel to my review of 2023.
    In this article:

  sentence_0: How does the review of 2024 compare to the previous year's review of 2023?
  sentence_1: (the same passage as above)

  sentence_0: What advancements have been made in multimodal vision and audio/video capabilities in LLMs?
  sentence_1:
    The GPT-4 barrier was comprehensively broken
    Some of those GPT-4 models run on my laptop
    LLM prices crashed, thanks to competition and increased efficiency
    Multimodal vision is common, audio and video are starting to emerge
    Voice and live camera mode are science fiction come to life
    Prompt driven app generation is a commodity already
    Universal access to the best models lasted for just a few short months
    “Agents” still haven’t really happened yet
    Evals really matter
    Apple Intelligence is bad, Apple’s MLX library is excellent
    The rise of inference-scaling “reasoning” models
    Was the best currently available LLM trained in China for less than $6m?
    The environmental impact got better
    The environmental impact got much, much worse

- Loss: MatryoshkaLoss with these parameters:
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
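In sentence-transformers, that configuration corresponds roughly to the construction below (a sketch, not the exact training script). MatryoshkaLoss applies the inner MultipleNegativesRankingLoss at each truncated embedding size, so the leading dimensions of the vector remain useful on their own:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch-negatives ranking loss, applied at five truncated embedding sizes,
# all weighted equally; n_dims_per_step=-1 trains every listed size each step
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)

One practical consequence: embeddings from this model should degrade gracefully when truncated, e.g. SentenceTransformer("thomfoolery/legal-ft-v0", truncate_dim=256) for cheaper storage and search.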
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
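Wired together, the run described by these hyperparameters looks roughly like the sketch below. The dataset and evaluator contents are illustrative stand-ins (the real run used the 156 pairs and the loss configured above):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.evaluation import InformationRetrievalEvaluator
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Stand-in for the 156 (sentence_0, sentence_1) pairs described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["What key themes were identified in 2024?"],
    "sentence_1": ["Things we learned about LLMs in 2024 ..."],
})

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# A tiny stand-in evaluator so eval_strategy="steps" has something to run
evaluator = InformationRetrievalEvaluator(
    queries={"q1": "What key themes were identified in 2024?"},
    corpus={"d1": "Things we learned about LLMs in 2024 ..."},
    relevant_docs={"q1": {"d1"}},
)

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()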
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 16 | 0.8830 |
2.0 | 32 | 0.9129 |
3.0 | 48 | 0.8994 |
3.125 | 50 | 0.8994 |
4.0 | 64 | 0.9231 |
5.0 | 80 | 0.9231 |
6.0 | 96 | 0.9231 |
6.25 | 100 | 0.9231 |
7.0 | 112 | 0.9231 |
8.0 | 128 | 0.9231 |
9.0 | 144 | 0.9231 |
9.375 | 150 | 0.9231 |
10.0 | 160 | 0.9231 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}