christinemahler committed on
Commit b563d62 · verified · 1 Parent(s): 171bdfc

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
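Only `pooling_mode_cls_token` is enabled in this pooling config, so the sentence embedding is simply the hidden state of the first ([CLS]) token of the 1024-dimensional transformer output. A minimal sketch of that pooling step in plain PyTorch (illustrative only, not the `Pooling` module's actual implementation):

```python
import torch

def cls_pool(token_embeddings: torch.Tensor) -> torch.Tensor:
    """CLS pooling: keep only the first token's hidden state per sequence.

    token_embeddings: (batch, seq_len, 1024) transformer output.
    Returns: (batch, 1024) sentence embeddings; the Normalize module listed
    in modules.json then L2-normalizes them.
    """
    return token_embeddings[:, 0]
```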
README.md ADDED
@@ -0,0 +1,723 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:156
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-l
+ widget:
+ - source_sentence: What notable development in LLM technology occurred in the final
+ quarter of 2024?
+ sentences:
+ - '17th: AI for Data Journalism: demonstrating what we can do with this stuff right
+ now
+
+
+ 22nd: Options for accessing Llama 3 from the terminal using LLM
+
+
+
+
+ May
+
+
+ 8th: Slop is the new name for unwanted AI-generated content
+
+
+ 15th: ChatGPT in “4o” mode is not running the new features yet
+
+
+ 29th: Training is not the same as chatting: ChatGPT and other LLMs don’t remember
+ everything you say
+
+
+
+
+ June
+
+
+ 6th: Accidental prompt injection against RAG applications
+
+
+ 10th: Thoughts on the WWDC 2024 keynote on Apple Intelligence
+
+
+ 17th: Language models on the command-line
+
+
+ 21st: Building search-based RAG using Claude, Datasette and Val Town
+
+
+ 27th: Open challenges for AI engineering
+
+
+
+
+ July
+
+
+ 14th: Imitation Intelligence, my keynote for PyCon US 2024'
+ - 'Now that those features are rolling out they’re pretty weak. As an LLM power-user
+ I know what these models are capable of, and Apple’s LLM features offer a pale
+ imitation of what a frontier LLM can do. Instead we’re getting notification summaries
+ that misrepresent news headlines and writing assistant tools that I’ve not found
+ useful at all. Genmoji are kind of fun though.
+
+ The rise of inference-scaling “reasoning” models
+
+ The most interesting development in the final quarter of 2024 was the introduction
+ of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as
+ o1-preview and o1-mini on September 12th.'
+ - 'Terminology aside, I remain skeptical as to their utility based, once again,
+ on the challenge of gullibility. LLMs believe anything you tell them. Any systems
+ that attempts to make meaningful decisions on your behalf will run into the same
+ roadblock: how good is a travel agent, or a digital assistant, or even a research
+ tool if it can’t distinguish truth from fiction?
+
+ Just the other day Google Search was caught serving up an entirely fake description
+ of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined
+ movie listing from a fan fiction wiki.'
+ - source_sentence: In what year does the author expect the prompt-driven custom interface
+ feature to be widely integrated into products?
+ sentences:
+ - 'Against this photo of butterflies at the California Academy of Sciences:
+
+
+
+ A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange
+ slices of fruit are visible inside the dish.
+
+ Two butterflies are positioned in the feeder, one is a dark brown/black butterfly
+ with white/cream-colored markings. The other is a large, brown butterfly with
+ patterns of lighter brown, beige, and black markings, including prominent eye
+ spots. The larger brown butterfly appears to be feeding on the fruit.'
+ - 'This prompt-driven custom interface feature is so powerful and easy to build
+ (once you’ve figured out the gnarly details of browser sandboxing) that I expect
+ it to show up as a feature in a wide range of products in 2025.
+
+ Universal access to the best models lasted for just a few short months
+
+ For a few short months this year all three of the best available models—GPT-4o,
+ Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.'
+ - 'The models may have got more capable, but most of the limitations remained the
+ same. OpenAI’s o1 may finally be able to (mostly) count the Rs in strawberry,
+ but its abilities are still limited by its nature as an LLM and the constraints
+ placed on it by the harness it’s running in. o1 can’t run web searches or use
+ Code Interpreter, but GPT-4o can—both in that same ChatGPT UI. (o1 will pretend
+ to do those things if you ask it to, a regression to the URL hallucinations bug
+ from early 2023).
+
+ What are we doing about this? Not much. Most users are thrown in at the deep end.
+ The default LLM chat UI is like taking brand new computer users, dropping them
+ into a Linux terminal and expecting them to figure it all out.'
+ - source_sentence: What is the license under which Alibaba's QwQ model was released?
+ sentences:
+ - The most recent twist, again from December (December was a lot) is live video.
+ ChatGPT voice mode now provides the option to share your camera feed with the
+ model and talk about what you can see in real time. Google Gemini have a preview
+ of the same feature, which they managed to ship the day before ChatGPT did.
+ - 'OpenAI are not the only game in town here. Google released their first entrant
+ in the category, gemini-2.0-flash-thinking-exp, on December 19th.
+
+ Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache
+ 2.0 license, and that one I could run on my own machine. They followed that up
+ with a vision reasoning model called QvQ on December 24th, which I also ran locally.
+
+ DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through
+ their chat interface on November 20th.
+
+ To understand more about inference scaling I recommend Is AI progress slowing
+ down? by Arvind Narayanan and Sayash Kapoor.'
+ - 'Stuff we figured out about AI in 2023
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Simon Willison’s Weblog
+
+ Subscribe
+
+
+
+
+
+
+
+ Stuff we figured out about AI in 2023
+
+ 31st December 2023
+
+ 2023 was the breakthrough year for Large Language Models (LLMs). I think it’s
+ OK to call these AI—they’re the latest and (currently) most interesting development
+ in the academic field of Artificial Intelligence that dates back to the 1950s.
+
+ Here’s my attempt to round up the highlights in one place!'
+ - source_sentence: What is the significance of the cost reduction mentioned in the
+ context regarding LLMs in 2024?
+ sentences:
+ - 'I think people who complain that LLM improvement has slowed are often missing
+ the enormous advances in these multi-modal models. Being able to run prompts against
+ images (and audio and video) is a fascinating new way to apply these models.
+
+ Voice and live camera mode are science fiction come to life
+
+ The audio and live video modes that have started to emerge deserve a special mention.
+
+ The ability to talk to ChatGPT first arrived in September 2023, but it was mostly
+ an illusion: OpenAI used their excellent Whisper speech-to-text model and a new
+ text-to-speech model (creatively named tts-1) to enable conversations with the
+ ChatGPT mobile apps, but the actual model just saw text.'
+ - 'I like people who are skeptical of this stuff. The hype has been deafening for
+ more than two years now, and there are enormous quantities of snake oil and misinformation
+ out there. A lot of very bad decisions are being made based on that hype. Being
+ critical is a virtue.
+
+ If we want people with decision-making authority to make good decisions about
+ how to apply these tools we first need to acknowledge that there ARE good applications,
+ and then help explain how to put those into practice while avoiding the many unintiutive
+ traps.
+
+ (If you still don’t think there are any good applications at all I’m not sure
+ why you made it to this point in the article!)'
+ - '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less
+ than a 400th of a cent).
+
+ This increase in efficiency and reduction in price is my single favourite trend
+ from 2024. I want the utility of LLMs at a fraction of the energy cost and it
+ looks like that’s what we’re getting.
+
+ Multimodal vision is common, audio and video are starting to emerge
+
+ My butterfly example above illustrates another key trend from 2024: the rise of
+ multi-modal LLMs.
+
+ A year ago the single most notable example of these was GPT-4 Vision, released
+ at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced
+ on December 7th 2023 so it also (just) makes it into the 2023 window.'
+ - source_sentence: How does the author feel about their choice of platform this year
+ compared to last year?
+ sentences:
+ - 'I’m still trying to figure out the best patterns for doing this for my own work.
+ Everyone knows that evals are important, but there remains a lack of great guidance
+ for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
+ riding a bicycle benchmark is a pale imitation of what a real eval suite should
+ look like.
+
+ Apple Intelligence is bad, Apple’s MLX library is excellent
+
+ As a Mac user I’ve been feeling a lot better about my choice of platform this
+ year.
+
+ Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
+ was a huge disadvantage in terms of trying out new models.'
+ - 'The GPT-4 barrier was comprehensively broken
+
+ Some of those GPT-4 models run on my laptop
+
+ LLM prices crashed, thanks to competition and increased efficiency
+
+ Multimodal vision is common, audio and video are starting to emerge
+
+ Voice and live camera mode are science fiction come to life
+
+ Prompt driven app generation is a commodity already
+
+ Universal access to the best models lasted for just a few short months
+
+ “Agents” still haven’t really happened yet
+
+ Evals really matter
+
+ Apple Intelligence is bad, Apple’s MLX library is excellent
+
+ The rise of inference-scaling “reasoning” models
+
+ Was the best currently available LLM trained in China for less than $6m?
+
+ The environmental impact got better
+
+ The environmental impact got much, much worse'
+ - Structured and Gradual Learning. In organic datasets, the relationship between
+ tokens is often complex and indirect. Many reasoning steps may be required to
+ connect the current token to the next, making it challenging for the model to
+ learn effectively from next-token prediction. By contrast, each token generated
+ by a language model is by definition predicted by the preceding tokens, making
+ it easier for a model to follow the resulting reasoning patterns.
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: Unknown
+ type: unknown
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.7916666666666666
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 1.0
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 1.0
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 1.0
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.7916666666666666
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.3333333333333333
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.20000000000000004
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.10000000000000002
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.7916666666666666
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 1.0
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 1.0
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 1.0
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.9121995525297656
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.8819444444444443
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.8819444444444443
+ name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("christinemahler/legal-ft-v0")
+ # Run inference
+ sentences = [
+ 'How does the author feel about their choice of platform this year compared to last year?',
+ 'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.\nApple Intelligence is bad, Apple’s MLX library is excellent\nAs a Mac user I’ve been feeling a lot better about my choice of platform this year.\nLast year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.',
+ 'Structured and Gradual Learning. In organic datasets, the relationship between tokens is often complex and indirect. Many reasoning steps may be required to connect the current token to the next, making it challenging for the model to learn effectively from next-token prediction. By contrast, each token generated by a language model is by definition predicted by the preceding tokens, making it easier for a model to follow the resulting reasoning patterns.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.7917 |
+ | cosine_accuracy@3 | 1.0 |
+ | cosine_accuracy@5 | 1.0 |
+ | cosine_accuracy@10 | 1.0 |
+ | cosine_precision@1 | 0.7917 |
+ | cosine_precision@3 | 0.3333 |
+ | cosine_precision@5 | 0.2 |
+ | cosine_precision@10 | 0.1 |
+ | cosine_recall@1 | 0.7917 |
+ | cosine_recall@3 | 1.0 |
+ | cosine_recall@5 | 1.0 |
+ | cosine_recall@10 | 1.0 |
+ | **cosine_ndcg@10** | **0.9122** |
+ | cosine_mrr@10 | 0.8819 |
+ | cosine_map@100 | 0.8819 |
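The table above was produced by `InformationRetrievalEvaluator`. A hedged sketch of running such an evaluation yourself, reusing the `model` loaded in the usage example; the `queries`, `corpus` and `relevant_docs` dictionaries below are invented placeholders:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy placeholder data: ids mapped to text, plus the relevant corpus ids per query.
queries = {"q1": "What is the license under which Alibaba's QwQ model was released?"}
corpus = {"d1": "Alibaba’s Qwen team released their QwQ model under an Apache 2.0 license."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="example-eval",  # illustrative name
)
results = evaluator(model)  # dict with cosine_accuracy@k, cosine_ndcg@10, ...
print(results)
```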
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 156 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 156 samples:
+ | | sentence_0 | sentence_1 |
+ |:--------|:------------|:------------|
+ | type | string | string |
+ | details | <ul><li>min: 13 tokens</li><li>mean: 20.09 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.51 tokens</li><li>max: 204 tokens</li></ul> |
+ * Samples:
+ | sentence_0 | sentence_1 |
+ |:-----------|:-----------|
+ | <code>What key themes and pivotal moments in the field of Large Language Models were identified in 2024?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
+ | <code>How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
+ | <code>What advancements have been made in multimodal vision and audio/video capabilities in LLMs?</code> | <code>The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+ ```json
+ {
+ "loss": "MultipleNegativesRankingLoss",
+ "matryoshka_dims": [
+ 768,
+ 512,
+ 256,
+ 128,
+ 64
+ ],
+ "matryoshka_weights": [
+ 1,
+ 1,
+ 1,
+ 1,
+ 1
+ ],
+ "n_dims_per_step": -1
+ }
+ ```
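These parameters amount to wrapping a `MultipleNegativesRankingLoss` in a `MatryoshkaLoss`, so the same in-batch ranking objective is also applied to embeddings truncated to 768, 512, 256, 128 and 64 dimensions. A minimal construction sketch (variable names are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Ranking loss over (sentence_0, sentence_1) pairs with in-batch negatives,
# supervised at every Matryoshka dimension listed above (all weighted 1).
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```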
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `num_train_epochs`: 10
+ - `multi_dataset_batch_sampler`: round_robin
+
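A hedged sketch of how these non-default values map onto the Sentence Transformers trainer; the output directory and the dataset/evaluator variables are placeholders, and `loss` stands for a MatryoshkaLoss configured as above:

```python
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output/finetuned-arctic-embed-l",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,                  # the SentenceTransformer being fine-tuned
    args=args,
    train_dataset=train_dataset,  # the 156 (sentence_0, sentence_1) pairs
    eval_dataset=eval_dataset,    # placeholder; expected when eval_strategy="steps"
    loss=loss,
    evaluator=evaluator,          # e.g. an InformationRetrievalEvaluator
)
trainer.train()
```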
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | cosine_ndcg@10 |
+ |:-----:|:----:|:--------------:|
+ | 1.0 | 16 | 0.9010 |
+ | 2.0 | 32 | 0.9064 |
+ | 3.0 | 48 | 0.8856 |
+ | 3.125 | 50 | 0.8856 |
+ | 4.0 | 64 | 0.9039 |
+ | 5.0 | 80 | 0.9067 |
+ | 6.0 | 96 | 0.9039 |
+ | 6.25 | 100 | 0.9093 |
+ | 7.0 | 112 | 0.9122 |
+ | 8.0 | 128 | 0.9122 |
+ | 9.0 | 144 | 0.9122 |
+ | 9.375 | 150 | 0.9122 |
+ | 10.0 | 160 | 0.9122 |
+
+
+ ### Framework Versions
+ - Python: 3.11.11
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.48.3
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.3.0
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+ title={Matryoshka Representation Learning},
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+ year={2024},
+ eprint={2205.13147},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.48.3",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.4.1",
+ "transformers": "4.48.3",
+ "pytorch": "2.5.1+cu124"
+ },
+ "prompts": {
+ "query": "Represent this sentence for searching relevant passages: "
+ },
+ "default_prompt_name": null,
+ "similarity_fn_name": "cosine"
+ }
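The `prompts` entry means retrieval queries are expected to carry the prefix "Represent this sentence for searching relevant passages: ", while passages are encoded as-is. A hedged usage sketch (the query and document strings are invented examples):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("christinemahler/legal-ft-v0")

# The "query" prompt from this config is applied automatically via prompt_name.
query_embedding = model.encode(
    "What was the most interesting LLM development in late 2024?",
    prompt_name="query",
)
doc_embeddings = model.encode([
    "The rise of inference-scaling “reasoning” models",
    "Multimodal vision is common, audio and video are starting to emerge",
])

print(model.similarity(query_embedding, doc_embeddings))  # shape: (1, 2)
```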
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26589c2c9f61508da83d86e13d29a77668ec74d80871235991e0386640b730e0
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_lower_case": true,
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "max_length": 512,
+ "model_max_length": 512,
+ "pad_to_multiple_of": null,
+ "pad_token": "[PAD]",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
+ "sep_token": "[SEP]",
+ "stride": 0,
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff