---
tags:
- mteb
- llama-cpp
- gguf-my-repo
language:
- zh
license: cc-by-nc-4.0
library_name: sentence-transformers
base_model: TencentBAC/Conan-embedding-v1
model-index:
- name: conan-embedding
  results:
  - task:
      type: STS
    dataset:
      name: MTEB AFQMC
      type: C-MTEB/AFQMC
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 56.613572467148856
    - type: cos_sim_spearman
      value: 60.66446211824284
    - type: euclidean_pearson
      value: 58.42080485872613
    - type: euclidean_spearman
      value: 59.82750030458164
    - type: manhattan_pearson
      value: 58.39885271199772
    - type: manhattan_spearman
      value: 59.817749720366734
  - task:
      type: STS
    dataset:
      name: MTEB ATEC
      type: C-MTEB/ATEC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 56.60530380552331
    - type: cos_sim_spearman
      value: 58.63822441736707
    - type: euclidean_pearson
      value: 62.18551665180664
    - type: euclidean_spearman
      value: 58.23168804495912
    - type: manhattan_pearson
      value: 62.17191480770053
    - type: manhattan_spearman
      value: 58.22556219601401
  - task:
      type: Classification
    dataset:
      name: MTEB AmazonReviewsClassification (zh)
      type: mteb/amazon_reviews_multi
      config: zh
      split: test
      revision: 1399c76144fd37290681b995c656ef9b2e06e26d
    metrics:
    - type: accuracy
      value: 50.308
    - type: f1
      value: 46.927458607895126
  - task:
      type: STS
    dataset:
      name: MTEB BQ
      type: C-MTEB/BQ
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 72.6472074172711
    - type: cos_sim_spearman
      value: 74.50748447236577
    - type: euclidean_pearson
      value: 72.51833296451854
    - type: euclidean_spearman
      value: 73.9898922606105
    - type: manhattan_pearson
      value: 72.50184948939338
    - type: manhattan_spearman
      value: 73.97797921509638
  - task:
      type: Clustering
    dataset:
      name: MTEB CLSClusteringP2P
      type: C-MTEB/CLSClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 60.63545326048343
  - task:
      type: Clustering
    dataset:
      name: MTEB CLSClusteringS2S
      type: C-MTEB/CLSClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 52.64834762325994
  - task:
      type: Reranking
    dataset:
      name: MTEB CMedQAv1
      type: C-MTEB/CMedQAv1-reranking
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 91.38528814655234
    - type: mrr
      value: 93.35857142857144
  - task:
      type: Reranking
    dataset:
      name: MTEB CMedQAv2
      type: C-MTEB/CMedQAv2-reranking
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 89.72084678877096
    - type: mrr
      value: 91.74380952380953
  - task:
      type: Retrieval
    dataset:
      name: MTEB CmedqaRetrieval
      type: C-MTEB/CmedqaRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 26.987
    - type: map_at_10
      value: 40.675
    - type: map_at_100
      value: 42.495
    - type: map_at_1000
      value: 42.596000000000004
    - type: map_at_3
      value: 36.195
    - type: map_at_5
      value: 38.704
    - type: mrr_at_1
      value: 41.21
    - type: mrr_at_10
      value: 49.816
    - type: mrr_at_100
      value: 50.743
    - type: mrr_at_1000
      value: 50.77700000000001
    - type: mrr_at_3
      value: 47.312
    - type: mrr_at_5
      value: 48.699999999999996
    - type: ndcg_at_1
      value: 41.21
    - type: ndcg_at_10
      value: 47.606
    - type: ndcg_at_100
      value: 54.457
    - type: ndcg_at_1000
      value: 56.16100000000001
    - type: ndcg_at_3
      value: 42.108000000000004
    - type: ndcg_at_5
      value: 44.393
    - type: precision_at_1
      value: 41.21
    - type: precision_at_10
      value: 10.593
    - type: precision_at_100
      value: 1.609
    - type: precision_at_1000
      value: 0.183
    - type: precision_at_3
      value: 23.881
    - type: precision_at_5
      value: 17.339
    - type: recall_at_1
      value: 26.987
    - type: recall_at_10
      value: 58.875
    - type: recall_at_100
      value: 87.023
    - type: recall_at_1000
      value: 98.328
    - type: recall_at_3
      value: 42.265
    - type: recall_at_5
      value: 49.334
  - task:
      type: PairClassification
    dataset:
      name: MTEB Cmnli
      type: C-MTEB/CMNLI
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 85.91701743836441
    - type: cos_sim_ap
      value: 92.53650618807644
    - type: cos_sim_f1
      value: 86.80265975431082
    - type: cos_sim_precision
      value: 83.79025239338556
    - type: cos_sim_recall
      value: 90.039747486556
    - type: dot_accuracy
      value: 77.17378232110643
    - type: dot_ap
      value: 85.40244368166546
    - type: dot_f1
      value: 79.03038001481951
    - type: dot_precision
      value: 72.20502901353966
    - type: dot_recall
      value: 87.2808043020809
    - type: euclidean_accuracy
      value: 84.65423932651834
    - type: euclidean_ap
      value: 91.47775530034588
    - type: euclidean_f1
      value: 85.64471499723298
    - type: euclidean_precision
      value: 81.31567885666246
    - type: euclidean_recall
      value: 90.46060322656068
    - type: manhattan_accuracy
      value: 84.58208057726999
    - type: manhattan_ap
      value: 91.46228709402014
    - type: manhattan_f1
      value: 85.6631626034444
    - type: manhattan_precision
      value: 82.10075026795283
    - type: manhattan_recall
      value: 89.5487491232172
    - type: max_accuracy
      value: 85.91701743836441
    - type: max_ap
      value: 92.53650618807644
    - type: max_f1
      value: 86.80265975431082
  - task:
      type: Retrieval
    dataset:
      name: MTEB CovidRetrieval
      type: C-MTEB/CovidRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 83.693
    - type: map_at_10
      value: 90.098
    - type: map_at_100
      value: 90.145
    - type: map_at_1000
      value: 90.146
    - type: map_at_3
      value: 89.445
    - type: map_at_5
      value: 89.935
    - type: mrr_at_1
      value: 83.878
    - type: mrr_at_10
      value: 90.007
    - type: mrr_at_100
      value: 90.045
    - type: mrr_at_1000
      value: 90.046
    - type: mrr_at_3
      value: 89.34
    - type: mrr_at_5
      value: 89.835
    - type: ndcg_at_1
      value: 84.089
    - type: ndcg_at_10
      value: 92.351
    - type: ndcg_at_100
      value: 92.54599999999999
    - type: ndcg_at_1000
      value: 92.561
    - type: ndcg_at_3
      value: 91.15299999999999
    - type: ndcg_at_5
      value: 91.968
    - type: precision_at_1
      value: 84.089
    - type: precision_at_10
      value: 10.011000000000001
    - type: precision_at_100
      value: 1.009
    - type: precision_at_1000
      value: 0.101
    - type: precision_at_3
      value: 32.28
    - type: precision_at_5
      value: 19.789
    - type: recall_at_1
      value: 83.693
    - type: recall_at_10
      value: 99.05199999999999
    - type: recall_at_100
      value: 99.895
    - type: recall_at_1000
      value: 100
    - type: recall_at_3
      value: 95.917
    - type: recall_at_5
      value: 97.893
  - task:
      type: Retrieval
    dataset:
      name: MTEB DuRetrieval
      type: C-MTEB/DuRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 26.924
    - type: map_at_10
      value: 81.392
    - type: map_at_100
      value: 84.209
    - type: map_at_1000
      value: 84.237
    - type: map_at_3
      value: 56.998000000000005
    - type: map_at_5
      value: 71.40100000000001
    - type: mrr_at_1
      value: 91.75
    - type: mrr_at_10
      value: 94.45
    - type: mrr_at_100
      value: 94.503
    - type: mrr_at_1000
      value: 94.505
    - type: mrr_at_3
      value: 94.258
    - type: mrr_at_5
      value: 94.381
    - type: ndcg_at_1
      value: 91.75
    - type: ndcg_at_10
      value: 88.53
    - type: ndcg_at_100
      value: 91.13900000000001
    - type: ndcg_at_1000
      value: 91.387
    - type: ndcg_at_3
      value: 87.925
    - type: ndcg_at_5
      value: 86.461
    - type: precision_at_1
      value: 91.75
    - type: precision_at_10
      value: 42.05
    - type: precision_at_100
      value: 4.827
    - type: precision_at_1000
      value: 0.48900000000000005
    - type: precision_at_3
      value: 78.55
    - type: precision_at_5
      value: 65.82000000000001
    - type: recall_at_1
      value: 26.924
    - type: recall_at_10
      value: 89.338
    - type: recall_at_100
      value: 97.856
    - type: recall_at_1000
      value: 99.11
    - type: recall_at_3
      value: 59.202999999999996
    - type: recall_at_5
      value: 75.642
  - task:
      type: Retrieval
    dataset:
      name: MTEB EcomRetrieval
      type: C-MTEB/EcomRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 54.800000000000004
    - type: map_at_10
      value: 65.613
    - type: map_at_100
      value: 66.185
    - type: map_at_1000
      value: 66.191
    - type: map_at_3
      value: 62.8
    - type: map_at_5
      value: 64.535
    - type: mrr_at_1
      value: 54.800000000000004
    - type: mrr_at_10
      value: 65.613
    - type: mrr_at_100
      value: 66.185
    - type: mrr_at_1000
      value: 66.191
    - type: mrr_at_3
      value: 62.8
    - type: mrr_at_5
      value: 64.535
    - type: ndcg_at_1
      value: 54.800000000000004
    - type: ndcg_at_10
      value: 70.991
    - type: ndcg_at_100
      value: 73.434
    - type: ndcg_at_1000
      value: 73.587
    - type: ndcg_at_3
      value: 65.324
    - type: ndcg_at_5
      value: 68.431
    - type: precision_at_1
      value: 54.800000000000004
    - type: precision_at_10
      value: 8.790000000000001
    - type: precision_at_100
      value: 0.9860000000000001
    - type: precision_at_1000
      value: 0.1
    - type: precision_at_3
      value: 24.2
    - type: precision_at_5
      value: 16.02
    - type: recall_at_1
      value: 54.800000000000004
    - type: recall_at_10
      value: 87.9
    - type: recall_at_100
      value: 98.6
    - type: recall_at_1000
      value: 99.8
    - type: recall_at_3
      value: 72.6
    - type: recall_at_5
      value: 80.10000000000001
  - task:
      type: Classification
    dataset:
      name: MTEB IFlyTek
      type: C-MTEB/IFlyTek-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 51.94305502116199
    - type: f1
      value: 39.82197338426721
  - task:
      type: Classification
    dataset:
      name: MTEB JDReview
      type: C-MTEB/JDReview-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 90.31894934333957
    - type: ap
      value: 63.89821836499594
    - type: f1
      value: 85.93687177603624
  - task:
      type: STS
    dataset:
      name: MTEB LCQMC
      type: C-MTEB/LCQMC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 73.18906216730208
    - type: cos_sim_spearman
      value: 79.44570226735877
    - type: euclidean_pearson
      value: 78.8105072242798
    - type: euclidean_spearman
      value: 79.15605680863212
    - type: manhattan_pearson
      value: 78.80576507484064
    - type: manhattan_spearman
      value: 79.14625534068364
  - task:
      type: Reranking
    dataset:
      name: MTEB MMarcoReranking
      type: C-MTEB/Mmarco-reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 41.58107192600853
    - type: mrr
      value: 41.37063492063492
  - task:
      type: Retrieval
    dataset:
      name: MTEB MMarcoRetrieval
      type: C-MTEB/MMarcoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 68.33
    - type: map_at_10
      value: 78.261
    - type: map_at_100
      value: 78.522
    - type: map_at_1000
      value: 78.527
    - type: map_at_3
      value: 76.236
    - type: map_at_5
      value: 77.557
    - type: mrr_at_1
      value: 70.602
    - type: mrr_at_10
      value: 78.779
    - type: mrr_at_100
      value: 79.00500000000001
    - type: mrr_at_1000
      value: 79.01
    - type: mrr_at_3
      value: 77.037
    - type: mrr_at_5
      value: 78.157
    - type: ndcg_at_1
      value: 70.602
    - type: ndcg_at_10
      value: 82.254
    - type: ndcg_at_100
      value: 83.319
    - type: ndcg_at_1000
      value: 83.449
    - type: ndcg_at_3
      value: 78.46
    - type: ndcg_at_5
      value: 80.679
    - type: precision_at_1
      value: 70.602
    - type: precision_at_10
      value: 9.989
    - type: precision_at_100
      value: 1.05
    - type: precision_at_1000
      value: 0.106
    - type: precision_at_3
      value: 29.598999999999997
    - type: precision_at_5
      value: 18.948
    - type: recall_at_1
      value: 68.33
    - type: recall_at_10
      value: 94.00800000000001
    - type: recall_at_100
      value: 98.589
    - type: recall_at_1000
      value: 99.60799999999999
    - type: recall_at_3
      value: 84.057
    - type: recall_at_5
      value: 89.32900000000001
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveIntentClassification (zh-CN)
      type: mteb/amazon_massive_intent
      config: zh-CN
      split: test
      revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
    metrics:
    - type: accuracy
      value: 78.13718897108272
    - type: f1
      value: 74.07613180855328
  - task:
      type: Classification
    dataset:
      name: MTEB MassiveScenarioClassification (zh-CN)
      type: mteb/amazon_massive_scenario
      config: zh-CN
      split: test
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
    metrics:
    - type: accuracy
      value: 86.20040349697376
    - type: f1
      value: 85.05282136519973
  - task:
      type: Retrieval
    dataset:
      name: MTEB MedicalRetrieval
      type: C-MTEB/MedicalRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 56.8
    - type: map_at_10
      value: 64.199
    - type: map_at_100
      value: 64.89
    - type: map_at_1000
      value: 64.917
    - type: map_at_3
      value: 62.383
    - type: map_at_5
      value: 63.378
    - type: mrr_at_1
      value: 56.8
    - type: mrr_at_10
      value: 64.199
    - type: mrr_at_100
      value: 64.89
    - type: mrr_at_1000
      value: 64.917
    - type: mrr_at_3
      value: 62.383
    - type: mrr_at_5
      value: 63.378
    - type: ndcg_at_1
      value: 56.8
    - type: ndcg_at_10
      value: 67.944
    - type: ndcg_at_100
      value: 71.286
    - type: ndcg_at_1000
      value: 71.879
    - type: ndcg_at_3
      value: 64.163
    - type: ndcg_at_5
      value: 65.96600000000001
    - type: precision_at_1
      value: 56.8
    - type: precision_at_10
      value: 7.9799999999999995
    - type: precision_at_100
      value: 0.954
    - type: precision_at_1000
      value: 0.1
    - type: precision_at_3
      value: 23.1
    - type: precision_at_5
      value: 14.74
    - type: recall_at_1
      value: 56.8
    - type: recall_at_10
      value: 79.80000000000001
    - type: recall_at_100
      value: 95.39999999999999
    - type: recall_at_1000
      value: 99.8
    - type: recall_at_3
      value: 69.3
    - type: recall_at_5
      value: 73.7
  - task:
      type: Classification
    dataset:
      name: MTEB MultilingualSentiment
      type: C-MTEB/MultilingualSentiment-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 78.57666666666667
    - type: f1
      value: 78.23373528202681
  - task:
      type: PairClassification
    dataset:
      name: MTEB Ocnli
      type: C-MTEB/OCNLI
      config: default
      split: validation
      revision: None
    metrics:
    - type: cos_sim_accuracy
      value: 85.43584190579317
    - type: cos_sim_ap
      value: 90.76665640338129
    - type: cos_sim_f1
      value: 86.5021770682148
    - type: cos_sim_precision
      value: 79.82142857142858
    - type: cos_sim_recall
      value: 94.40337909186906
    - type: dot_accuracy
      value: 78.66811044937737
    - type: dot_ap
      value: 85.84084363880804
    - type: dot_f1
      value: 80.10075566750629
    - type: dot_precision
      value: 76.58959537572254
    - type: dot_recall
      value: 83.9493136219641
    - type: euclidean_accuracy
      value: 84.46128857606931
    - type: euclidean_ap
      value: 88.62351100230491
    - type: euclidean_f1
      value: 85.7709469509172
    - type: euclidean_precision
      value: 80.8411214953271
    - type: euclidean_recall
      value: 91.34107708553326
    - type: manhattan_accuracy
      value: 84.51543042772063
    - type: manhattan_ap
      value: 88.53975607870393
    - type: manhattan_f1
      value: 85.75697211155378
    - type: manhattan_precision
      value: 81.14985862393968
    - type: manhattan_recall
      value: 90.91869060190075
    - type: max_accuracy
      value: 85.43584190579317
    - type: max_ap
      value: 90.76665640338129
    - type: max_f1
      value: 86.5021770682148
  - task:
      type: Classification
    dataset:
      name: MTEB OnlineShopping
      type: C-MTEB/OnlineShopping-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 95.06999999999998
    - type: ap
      value: 93.45104559324996
    - type: f1
      value: 95.06036329426092
  - task:
      type: STS
    dataset:
      name: MTEB PAWSX
      type: C-MTEB/PAWSX
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 40.01998290519605
    - type: cos_sim_spearman
      value: 46.5989769986853
    - type: euclidean_pearson
      value: 45.37905883182924
    - type: euclidean_spearman
      value: 46.22213849806378
    - type: manhattan_pearson
      value: 45.40925124776211
    - type: manhattan_spearman
      value: 46.250705124226386
  - task:
      type: STS
    dataset:
      name: MTEB QBQTC
      type: C-MTEB/QBQTC
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 42.719516197112526
    - type: cos_sim_spearman
      value: 44.57507789581106
    - type: euclidean_pearson
      value: 35.73062264160721
    - type: euclidean_spearman
      value: 40.473523909913695
    - type: manhattan_pearson
      value: 35.69868964086357
    - type: manhattan_spearman
      value: 40.46349925372903
  - task:
      type: STS
    dataset:
      name: MTEB STS22 (zh)
      type: mteb/sts22-crosslingual-sts
      config: zh
      split: test
      revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
    metrics:
    - type: cos_sim_pearson
      value: 62.340118285801104
    - type: cos_sim_spearman
      value: 67.72781908620632
    - type: euclidean_pearson
      value: 63.161965746091596
    - type: euclidean_spearman
      value: 67.36825684340769
    - type: manhattan_pearson
      value: 63.089863788261425
    - type: manhattan_spearman
      value: 67.40868898995384
  - task:
      type: STS
    dataset:
      name: MTEB STSB
      type: C-MTEB/STSB
      config: default
      split: test
      revision: None
    metrics:
    - type: cos_sim_pearson
      value: 79.1646360962365
    - type: cos_sim_spearman
      value: 81.24426700767087
    - type: euclidean_pearson
      value: 79.43826409936123
    - type: euclidean_spearman
      value: 79.71787965300125
    - type: manhattan_pearson
      value: 79.43377784961737
    - type: manhattan_spearman
      value: 79.69348376886967
  - task:
      type: Reranking
    dataset:
      name: MTEB T2Reranking
      type: C-MTEB/T2Reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 68.35595092507496
    - type: mrr
      value: 79.00244892585788
  - task:
      type: Retrieval
    dataset:
      name: MTEB T2Retrieval
      type: C-MTEB/T2Retrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 26.588
    - type: map_at_10
      value: 75.327
    - type: map_at_100
      value: 79.095
    - type: map_at_1000
      value: 79.163
    - type: map_at_3
      value: 52.637
    - type: map_at_5
      value: 64.802
    - type: mrr_at_1
      value: 88.103
    - type: mrr_at_10
      value: 91.29899999999999
    - type: mrr_at_100
      value: 91.408
    - type: mrr_at_1000
      value: 91.411
    - type: mrr_at_3
      value: 90.801
    - type: mrr_at_5
      value: 91.12700000000001
    - type: ndcg_at_1
      value: 88.103
    - type: ndcg_at_10
      value: 83.314
    - type: ndcg_at_100
      value: 87.201
    - type: ndcg_at_1000
      value: 87.83999999999999
    - type: ndcg_at_3
      value: 84.408
    - type: ndcg_at_5
      value: 83.078
    - type: precision_at_1
      value: 88.103
    - type: precision_at_10
      value: 41.638999999999996
    - type: precision_at_100
      value: 5.006
    - type: precision_at_1000
      value: 0.516
    - type: precision_at_3
      value: 73.942
    - type: precision_at_5
      value: 62.056
    - type: recall_at_1
      value: 26.588
    - type: recall_at_10
      value: 82.819
    - type: recall_at_100
      value: 95.334
    - type: recall_at_1000
      value: 98.51299999999999
    - type: recall_at_3
      value: 54.74
    - type: recall_at_5
      value: 68.864
  - task:
      type: Classification
    dataset:
      name: MTEB TNews
      type: C-MTEB/TNews-classification
      config: default
      split: validation
      revision: None
    metrics:
    - type: accuracy
      value: 55.029
    - type: f1
      value: 53.043617905026764
  - task:
      type: Clustering
    dataset:
      name: MTEB ThuNewsClusteringP2P
      type: C-MTEB/ThuNewsClusteringP2P
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 77.83675116835911
  - task:
      type: Clustering
    dataset:
      name: MTEB ThuNewsClusteringS2S
      type: C-MTEB/ThuNewsClusteringS2S
      config: default
      split: test
      revision: None
    metrics:
    - type: v_measure
      value: 74.19701455865277
  - task:
      type: Retrieval
    dataset:
      name: MTEB VideoRetrieval
      type: C-MTEB/VideoRetrieval
      config: default
      split: dev
      revision: None
    metrics:
    - type: map_at_1
      value: 64.7
    - type: map_at_10
      value: 75.593
    - type: map_at_100
      value: 75.863
    - type: map_at_1000
      value: 75.863
    - type: map_at_3
      value: 73.63300000000001
    - type: map_at_5
      value: 74.923
    - type: mrr_at_1
      value: 64.7
    - type: mrr_at_10
      value: 75.593
    - type: mrr_at_100
      value: 75.863
    - type: mrr_at_1000
      value: 75.863
    - type: mrr_at_3
      value: 73.63300000000001
    - type: mrr_at_5
      value: 74.923
    - type: ndcg_at_1
      value: 64.7
    - type: ndcg_at_10
      value: 80.399
    - type: ndcg_at_100
      value: 81.517
    - type: ndcg_at_1000
      value: 81.517
    - type: ndcg_at_3
      value: 76.504
    - type: ndcg_at_5
      value: 78.79899999999999
    - type: precision_at_1
      value: 64.7
    - type: precision_at_10
      value: 9.520000000000001
    - type: precision_at_100
      value: 1
    - type: precision_at_1000
      value: 0.1
    - type: precision_at_3
      value: 28.266999999999996
    - type: precision_at_5
      value: 18.060000000000002
    - type: recall_at_1
      value: 64.7
    - type: recall_at_10
      value: 95.19999999999999
    - type: recall_at_100
      value: 100
    - type: recall_at_1000
      value: 100
    - type: recall_at_3
      value: 84.8
    - type: recall_at_5
      value: 90.3
  - task:
      type: Classification
    dataset:
      name: MTEB Waimai
      type: C-MTEB/waimai-classification
      config: default
      split: test
      revision: None
    metrics:
    - type: accuracy
      value: 89.69999999999999
    - type: ap
      value: 75.91371640164184
    - type: f1
      value: 88.34067777698694
---

# Hoshino-Yumetsuki/Conan-embedding-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`TencentBAC/Conan-embedding-v1`](https://huggingface.co/TencentBAC/Conan-embedding-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TencentBAC/Conan-embedding-v1) for more details on the model.

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Hoshino-Yumetsuki/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Hoshino-Yumetsuki/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -c 2048
```

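Since Conan-embedding-v1 is an embedding model rather than a chat model, in practice the server is started with the `--embedding` flag and queried for vectors, which are then ranked by cosine similarity. The sketch below is illustrative rather than official: it assumes llama.cpp's OpenAI-compatible `/v1/embeddings` endpoint on the default port, and the helper names (`embed`, `cosine_similarity`) are our own.

```python
import json
import math
import urllib.request


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def embed(texts, url="http://localhost:8080/v1/embeddings"):
    """Request embeddings from a running llama-server started with --embedding.

    Assumes the OpenAI-compatible response shape:
    {"data": [{"embedding": [...]}, ...]}.
    """
    payload = json.dumps({"input": texts}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [item["embedding"] for item in data["data"]]
```

With a server running locally, `embed(["query", "document"])` would return two vectors whose `cosine_similarity` scores the match; `cosine_similarity` itself needs no server.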
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Hoshino-Yumetsuki/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Hoshino-Yumetsuki/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -c 2048
```