Gonalb committed
Commit a21f608 · verified · 1 Parent(s): f7da2ec

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
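The pooling config above enables only CLS-token pooling (`pooling_mode_cls_token: true`; every other mode is off). A minimal sketch of what that pooling step does, using toy NumPy arrays in place of real transformer output (this is an illustration, not the library's actual implementation):

```python
import numpy as np

# Toy stand-in for transformer output: (batch, seq_len, hidden) token embeddings.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(2, 5, 4))

# CLS pooling: the sentence embedding is simply the first token's vector,
# which is what pooling_mode_cls_token=true selects when all other modes are off.
sentence_embeddings = token_embeddings[:, 0, :]

# The real pipeline then L2-normalizes the result (the Normalize() module
# listed later in modules.json).
normalized = sentence_embeddings / np.linalg.norm(
    sentence_embeddings, axis=1, keepdims=True
)
print(normalized.shape)  # (2, 4)
```

In the real model the hidden size is 1024, matching `word_embedding_dimension` above.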
README.md ADDED
@@ -0,0 +1,667 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Why does the author find the term "agents" extremely frustrating?
  sentences:
  - 'We already knew LLMs were spookily good at writing code. If you prompt them right,
    it turns out they can build you a full interactive application using HTML, CSS
    and JavaScript (and tools like React if you wire up some extra supporting build
    mechanisms)—often in a single prompt.

    Anthropic kicked this idea into high gear when they released Claude Artifacts,
    a groundbreaking new feature that was initially slightly lost in the noise due
    to being described half way through their announcement of the incredible Claude
    3.5 Sonnet.

    With Artifacts, Claude can write you an on-demand interactive application and
    then let you use it directly inside the Claude interface.

    Here’s my Extract URLs app, entirely generated by Claude:'
  - '“Agents” still haven’t really happened yet

    I find the term “agents” extremely frustrating. It lacks a single, clear and widely
    understood meaning... but the people who use the term never seem to acknowledge
    that.

    If you tell me that you are building “agents”, you’ve conveyed almost no information
    to me at all. Without reading your mind I have no way of telling which of the
    dozens of possible definitions you are talking about.'
  - 'I love the term “slop” because it so succinctly captures one of the ways we should
    not be using generative AI!

    Slop was even in the running for Oxford Word of the Year 2024, but it lost to
    brain rot.

    Synthetic training data works great

    An idea that surprisingly seems to have stuck in the public consciousness is that
    of “model collapse”. This was first described in the paper The Curse of Recursion:
    Training on Generated Data Makes Models Forget in May 2023, and repeated in Nature
    in July 2024 with the more eye-catching headline AI models collapse when trained
    on recursively generated data.'
- source_sentence: What paper did Meta publish in December that is relevant to inference-scaling
    models?
  sentences:
  - 'My personal laptop is a 64GB M2 MacBook Pro from 2023. It’s a powerful machine,
    but it’s also nearly two years old now—and crucially it’s the same laptop I’ve
    been using ever since I first ran an LLM on my computer back in March 2023 (see
    Large language models are having their Stable Diffusion moment).

    That same laptop that could just about run a GPT-3-class model in March last year
    has now run multiple GPT-4 class models! Some of my notes on that:'
  - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t
    have their own inference-scaling models in the works. Meta published a relevant
    paper Training Large Language Models to Reason in a Continuous Latent Space in
    December.

    Was the best currently available LLM trained in China for less than $6m?

    Not quite, but almost! It does make for a great attention-grabbing headline.

    The big news to end the year was the release of DeepSeek v3—dropped on Hugging
    Face on Christmas Day without so much as a README file, then followed by documentation
    and a paper the day after that.'
  - 'The GPT-4 barrier was comprehensively broken

    Some of those GPT-4 models run on my laptop

    LLM prices crashed, thanks to competition and increased efficiency

    Multimodal vision is common, audio and video are starting to emerge

    Voice and live camera mode are science fiction come to life

    Prompt driven app generation is a commodity already

    Universal access to the best models lasted for just a few short months

    “Agents” still haven’t really happened yet

    Evals really matter

    Apple Intelligence is bad, Apple’s MLX library is excellent

    The rise of inference-scaling “reasoning” models

    Was the best currently available LLM trained in China for less than $6m?

    The environmental impact got better

    The environmental impact got much, much worse'
- source_sentence: How does the performance of the Llama 3.2 3B model compare to GPT-4
    according to the context?
  sentences:
  - 'I think this means that, as individual users, we don’t need to feel any guilt
    at all for the energy consumed by the vast majority of our prompts. The impact
    is likely negligible compared to driving a car down the street or maybe even watching
    a video on YouTube.

    Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign
    that training costs can and should continue to drop.

    For less efficient models I find it useful to compare their energy usage to commercial
    flights. The largest Llama 3 model cost about the same as a single digit number
    of fully loaded passenger flights from New York to London. That’s certainly not
    nothing, but once trained that model can be used by millions of people at no extra
    training cost.'
  - 'Meta’s Llama 3.2 models deserve a special mention. They may not be GPT-4 class,
    but at 1B and 3B sizes they punch massively above their weight. I run Llama 3.2
    3B on my iPhone using the free MLC Chat iOS app and it’s a shockingly capable
    model for its tiny (<2GB) size. Try firing it up and asking it for “a plot outline
    of a Netflix Christmas movie where a data journalist falls in love with a local
    ceramicist”. Here’s what I got, at a respectable 20 tokens per second:'
  - 'Prince Canuma’s excellent, fast moving mlx-vlm project brings vision LLMs to
    Apple Silicon as well. I used that recently to run Qwen’s QvQ.

    While MLX is a game changer, Apple’s own “Apple Intelligence” features have mostly
    been a disappointment. I wrote about their initial announcement in June, and I
    was optimistic that Apple had focused hard on the subset of LLM applications that
    preserve user privacy and minimize the chance of users getting misled by confusing
    features.'
- source_sentence: What was introduced by the Chatbot Arena team in December regarding
    user interaction with models?
  sentences:
  - 'The year of slop

    2024 was the year that the word "slop" became a term of art. I wrote about this
    in May, expanding on this tweet by @deepfates:'
  - 'The two main categories I see are people who think AI agents are obviously things
    that go and act on your behalf—the travel agent model—and people who think in
    terms of LLMs that have been given access to tools which they can run in a loop
    as part of solving a problem. The term “autonomy” is often thrown into the mix
    too, again without including a clear definition.

    (I also collected 211 definitions on Twitter a few months ago—here they are in
    Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)

    Whatever the term may mean, agents still have that feeling of perpetually “coming
    soon”.'
  - 'Then in December, the Chatbot Arena team introduced a whole new leaderboard for
    this feature, driven by users building the same interactive app twice with two
    different models and voting on the answer. Hard to come up with a more convincing
    argument that this feature is now a commodity that can be effectively implemented
    against all of the leading models.

    I’ve been tinkering with a version of this myself for my Datasette project, with
    the goal of letting users use prompts to build and iterate on custom widgets and
    data visualizations against their own data. I also figured out a similar pattern
    for writing one-shot Python programs, enabled by uv.'
- source_sentence: What does the cost of training the DeepSeek v3 model suggest about
    the future of training costs for AI models?
  sentences:
  - 'I think this means that, as individual users, we don’t need to feel any guilt
    at all for the energy consumed by the vast majority of our prompts. The impact
    is likely negligible compared to driving a car down the street or maybe even watching
    a video on YouTube.

    Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign
    that training costs can and should continue to drop.

    For less efficient models I find it useful to compare their energy usage to commercial
    flights. The largest Llama 3 model cost about the same as a single digit number
    of fully loaded passenger flights from New York to London. That’s certainly not
    nothing, but once trained that model can be used by millions of people at no extra
    training cost.'
  - 'There’s still plenty to worry about with respect to the environmental impact
    of the great AI datacenter buildout, but a lot of the concerns over the energy
    cost of individual prompts are no longer credible.

    Here’s a fun napkin calculation: how much would it cost to generate short descriptions
    of every one of the 68,000 photos in my personal photo library using Google’s
    Gemini 1.5 Flash 8B (released in October), their cheapest model?

    Each photo would need 260 input tokens and around 100 output tokens.

    260 * 68,000 = 17,680,000 input tokens

    17,680,000 * $0.0375/million = $0.66

    100 * 68,000 = 6,800,000 output tokens

    6,800,000 * $0.15/million = $1.02'
  - 'Large Language Models

    They’re actually quite easy to build

    You can run LLMs on your own devices

    Hobbyists can build their own fine-tuned models

    We don’t yet know how to build GPT-4

    Vibes Based Development

    LLMs are really smart, and also really, really dumb

    Gullibility is the biggest unsolved problem

    Code may be the best application

    The ethics of this space remain diabolically complex

    My blog in 2023'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.75
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9583333333333334
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.75
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3194444444444444
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.75
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9583333333333334
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8884777424494903
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.8506944444444445
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.8506944444444443
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gonalb/legal-ft-v0")
# Run inference
sentences = [
    'What does the cost of training the DeepSeek v3 model suggest about the future of training costs for AI models?',
    'I think this means that, as individual users, we don’t need to feel any guilt at all for the energy consumed by the vast majority of our prompts. The impact is likely negligible compared to driving a car down the street or maybe even watching a video on YouTube.\nLikewise, training. DeepSeek v3 training for less than $6m is a fantastic sign that training costs can and should continue to drop.\nFor less efficient models I find it useful to compare their energy usage to commercial flights. The largest Llama 3 model cost about the same as a single digit number of fully loaded passenger flights from New York to London. That’s certainly not nothing, but once trained that model can be used by millions of people at no extra training cost.',
    'Large Language Models\nThey’re actually quite easy to build\nYou can run LLMs on your own devices\nHobbyists can build their own fine-tuned models\nWe don’t yet know how to build GPT-4\nVibes Based Development\nLLMs are really smart, and also really, really dumb\nGullibility is the biggest unsolved problem\nCode may be the best application\nThe ethics of this space remain diabolically complex\nMy blog in 2023',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
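Since this model ends with a `Normalize()` module, `model.similarity` with the default cosine function reduces to a plain matrix product of the embedding rows. A minimal NumPy sketch of that computation, with tiny toy vectors standing in for the real 1024-dimensional embeddings:

```python
import numpy as np

# Toy stand-in embeddings (real ones come from model.encode and are 1024-dim).
embeddings = np.array([
    [1.0, 0.0, 0.0],
    [0.8, 0.6, 0.0],
    [0.0, 0.0, 1.0],
])

# Cosine similarity = dot product of L2-normalized rows; for an
# already-normalized model output this is just embeddings @ embeddings.T.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarities = normed @ normed.T
print(similarities.shape)       # (3, 3)
print(float(similarities[0, 1]))  # 0.8
```

Each diagonal entry is 1.0 (every vector is maximally similar to itself), mirroring the `[3, 3]` similarity matrix produced above.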

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.75       |
| cosine_accuracy@3   | 0.9583     |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.75       |
| cosine_precision@3  | 0.3194     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.75       |
| cosine_recall@3     | 0.9583     |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.8885** |
| cosine_mrr@10       | 0.8507     |
| cosine_map@100      | 0.8507     |
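With a single relevant passage per query, accuracy@k and recall@k coincide, which is why those rows match in the table above. A hypothetical mini-evaluation showing how such metrics follow from the rank at which each query's relevant passage is retrieved (the ranks here are made up for illustration, not taken from this model's evaluation):

```python
# 1-based rank of the single relevant passage for each of four toy queries.
ranks = [1, 1, 2, 4]

def accuracy_at_k(ranks, k):
    # Fraction of queries whose relevant passage appears in the top k.
    return sum(r <= k for r in ranks) / len(ranks)

def mrr_at_k(ranks, k):
    # Mean reciprocal rank, counting only hits within the top k.
    return sum(1.0 / r if r <= k else 0.0 for r in ranks) / len(ranks)

print(accuracy_at_k(ranks, 1))  # 0.5
print(accuracy_at_k(ranks, 3))  # 0.75
print(mrr_at_k(ranks, 10))      # (1 + 1 + 0.5 + 0.25) / 4 = 0.6875
```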

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0                                                                         | sentence_1                                                                           |
  |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                                |
  | details | <ul><li>min: 13 tokens</li><li>mean: 20.15 tokens</li><li>max: 29 tokens</li></ul>  | <ul><li>min: 43 tokens</li><li>mean: 130.44 tokens</li><li>max: 204 tokens</li></ul>  |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What tools did the author describe in their writing about Claude Artifacts?</code> | <code>I’ve found myself using this a lot. I noticed how much I was relying on it in October and wrote Everything I built with Claude Artifacts this week, describing 14 little tools I had put together in a seven day period.<br>Since then, a whole bunch of other teams have built similar systems. GitHub announced their version of this—GitHub Spark—in October. Mistral Chat added it as a feature called Canvas in November.<br>Steve Krouse from Val Town built a version of it against Cerebras, showcasing how a 2,000 token/second LLM can iterate on an application with changes visible in less than a second.</code> |
  | <code>What is the name of the feature added by Mistral Chat in November?</code> | <code>I’ve found myself using this a lot. I noticed how much I was relying on it in October and wrote Everything I built with Claude Artifacts this week, describing 14 little tools I had put together in a seven day period.<br>Since then, a whole bunch of other teams have built similar systems. GitHub announced their version of this—GitHub Spark—in October. Mistral Chat added it as a feature called Canvas in November.<br>Steve Krouse from Val Town built a version of it against Cerebras, showcasing how a 2,000 token/second LLM can iterate on an application with changes visible in less than a second.</code> |
  | <code>Why does the author find the term "agents" extremely frustrating?</code> | <code>“Agents” still haven’t really happened yet<br>I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.<br>If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
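MatryoshkaLoss trains the leading dimensions of each embedding to be usable on their own, so at inference time you can truncate embeddings to any of the configured dims (768, 512, 256, 128 or 64 above) and re-normalize. A minimal sketch of that truncation step, using random toy vectors in place of real model output:

```python
import numpy as np

def truncate_embeddings(embeddings, dim):
    """Keep the first `dim` dimensions and re-normalize, which is the
    usage pattern Matryoshka training is designed to support."""
    truncated = embeddings[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

rng = np.random.default_rng(0)
full = rng.normal(size=(4, 1024))  # toy stand-ins for 1024-dim embeddings
small = truncate_embeddings(full, 256)
print(small.shape)  # (4, 256)
```

Smaller truncations trade a little retrieval quality for 4–16× less storage and faster similarity search.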

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.8968         |
| 2.0   | 32   | 0.8939         |
| 3.0   | 48   | 0.8786         |
| 3.125 | 50   | 0.8731         |
| 4.0   | 64   | 0.8702         |
| 5.0   | 80   | 0.8684         |
| 6.0   | 96   | 0.8713         |
| 6.25  | 100  | 0.8731         |
| 7.0   | 112  | 0.8885         |
| 8.0   | 128  | 0.8856         |
| 9.0   | 144  | 0.8885         |
| 9.375 | 150  | 0.8885         |
| 10.0  | 160  | 0.8885         |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.48.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
{
  "__version__": {
    "sentence_transformers": "3.4.1",
    "transformers": "4.48.2",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: "
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ca2db76f0ebac4460470ae114fdd4e4aacad102606aa98e43b33254ae0e469a
size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff