Andreas Chari committed
Commit 22f0c87 · 1 Parent(s): 1ff2f1a

First model version

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13e69897522ee8255104483ed9f219465d1be3936654a54a318758738052789e
+ size 297
README.md CHANGED
@@ -1,3 +1,144 @@
- ---
- license: mit
- ---
+ ---
+ datasets: []
+ language: []
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ widget: []
+ ---
+
+ # SentenceTransformer
+
+ This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** 8192 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
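+
+ The architecture above is an XLM-RoBERTa encoder with CLS-token pooling followed by L2 normalisation. As a minimal, hedged sketch (not taken from this repository), equivalent inference with plain 🤗 Transformers might look like the following; `"sentence_transformers_model_id"` is the same placeholder used in the usage example below.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoModel, AutoTokenizer
+
+ model_id = "sentence_transformers_model_id"  # placeholder, substitute the actual repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModel.from_pretrained(model_id)
+ model.eval()
+
+ sentences = ["The weather is lovely today.", "It's so sunny outside!"]
+ inputs = tokenizer(sentences, padding=True, truncation=True, max_length=8192, return_tensors="pt")
+
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # CLS-token pooling (pooling_mode_cls_token=True) and L2 normalisation,
+ # mirroring the Pooling and Normalize modules listed above.
+ embeddings = F.normalize(outputs.last_hidden_state[:, 0], p=2, dim=1)
+ print(embeddings.shape)  # torch.Size([2, 1024])
+ ```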
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+     'The weather is lovely today.',
+     "It's so sunny outside!",
+     'He drove to the stadium.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
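+
+ Because the embeddings are L2-normalised and scored with cosine similarity, they can also be used directly for semantic search. A minimal sketch, reusing the same placeholder model id; the corpus and query below are illustrative, not taken from this repository:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repo id
+
+ corpus = [
+     "A man is eating food.",
+     "A man is riding a horse.",
+     "The new movie is awesome.",
+ ]
+ query = "Someone is having a meal."
+
+ corpus_embeddings = model.encode(corpus)
+ query_embedding = model.encode([query])
+
+ # Cosine similarity between the query and every corpus sentence
+ scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
+ best = int(scores.argmax())
+ print(corpus[best], float(scores[0, best]))
+ ```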
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Framework Versions
+ - Python: 3.10.14
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.41.2
+ - PyTorch: 2.4.0.post301
+ - Accelerate: 0.32.1
+ - Datasets: 2.19.1
+ - Tokenizers: 0.19.1
+
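+ The training data and procedure for this checkpoint are not documented in this card. If you want to fine-tune the model further on your own pairs, a minimal, hedged sketch with the Sentence Transformers v3 trainer API might look like the following; the dataset, loss, and hyperparameters are illustrative assumptions, not the settings used to produce this model.
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repo id
+
+ # Tiny illustrative (anchor, positive) pairs; replace with your own data.
+ train_dataset = Dataset.from_dict({
+     "anchor": ["The weather is lovely today.", "He drove to the stadium."],
+     "positive": ["It's so sunny outside!", "He took the car to the arena."],
+ })
+
+ loss = MultipleNegativesRankingLoss(model)
+ args = SentenceTransformerTrainingArguments(
+     output_dir="outputs",
+     num_train_epochs=1,
+     per_device_train_batch_size=2,
+ )
+
+ trainer = SentenceTransformerTrainer(
+     model=model,
+     args=args,
+     train_dataset=train_dataset,
+     loss=loss,
+ )
+ trainer.train()
+ ```
+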
+ ## Citation
+
+ ### BibTeX
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4da779080a64eb4e4024e184a7262b19c40548edd847fd4a17ba3ac39197cc27
+ size 732
config_sentence_transformers.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7691d7564606e502afbd7d54c193c6d3860361c0177b877560c85980bbbd768
+ size 203
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aacbaf97f3df4856790098f8d33121bddf6377c8c8858fc17ad52b5c032b184d
+ size 2271064456
modules.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84e40c8e006c9b1d6c122e02cba9b02458120b5fb0c87b746c41e0207cf642cf
+ size 349
sentence_bert_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb9b44b13c0f52a3b3685c3b1cbdea1ba8b04bea123b98f61610048940776eb1
+ size 54
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c785abebea9ae3257b61681b4e6fd8365ceafde980c21970d001e834cf10835
+ size 964
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b74659c780d49afad7a7b9799868f75cbd3014fb6c34956e85a793028d38094a
+ size 17098251
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e4c1cc848840aeccdd763458c18dd525eb0f795c992e00ebe9c28554e7db2d4
+ size 1173
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8daef365df9b896bb7a20411fedfb7dc5378207fd7483443432ebf31ed04ce26
+ size 5368