zihoo committed
Commit 8edc72f · verified · 1 Parent(s): f4c1d7a

Add new SentenceTransformer model.

Files changed (2)
  1. README.md +30 -29
  2. model.safetensors +1 -1
README.md CHANGED
@@ -12,31 +12,31 @@ tags:
   - dataset_size:200
   - loss:SoftmaxLoss
   widget:
- - source_sentence: ' I worry that AI will eventually replace my job,'
+ - source_sentence: ' AIs conversational abilities are becoming more human-like,'
   sentences:
- - ' AI might affect the stability of job markets globally,'
- - ' AI can mimic human facial expressions,'
- - ' AIs flexibility in updating its models is remarkable,'
- - source_sentence: ' AIs efficiency in data mining is impressive,'
+ - ' AI-generated presentations are often high-quality,'
+ - ' AIs conversational prowess rivals that of humans,'
+ - ' I feel intimidated by the complexity of AI technologies,'
+ - source_sentence: ' AI-generated presentations are often high-quality,'
   sentences:
- - ' AI could reduce the need for human intervention in many fields,'
- - ' AI will possibly replace customer service jobs,'
- - ' AI systems modify their processes based on feedback,'
- - source_sentence: ' AI can adjust to user preferences over time,'
+ - ' AIs progress poses risks to current job structures,'
+ - ' I get nervous when I have to use AI for complex tasks,'
+ - ' AI could lead to significant shifts in employment trends,'
+ - source_sentence: ' AIs data cleaning quality is impressive,'
   sentences:
- - ' AI can simulate human emotions in interactions,'
- - ' AIs precision in data analysis is highly reliable,'
- - ' The rapid growth of AI makes me uneasy,'
- - source_sentence: ' AI tools optimize my workflow, making tasks simpler,'
+ - ' I am anxious about relying on AI for critical decisions,'
+ - ' AI could make many existing job roles redundant,'
+ - ' AI could greatly impact the future job landscape,'
+ - source_sentence: ' Relying on AI for critical thinking tasks makes me nervous,'
   sentences:
- - ' AI tools aid significantly in project execution,'
- - ' AI can carry on meaningful conversations over the phone,'
- - ' AI helps me make better decisions through data analysis,'
- - source_sentence: ' AI effectively enhances my problem-solving strategies,'
+ - ' The rise of AI is creating job displacement fears,'
+ - ' The rapid growth of AI makes me uneasy,'
+ - ' I hesitate to use AI-powered tools without supervision,'
+ - source_sentence: ' AIs efficiency in data mining is impressive,'
   sentences:
- - ' AI consistently provides reliable recommendations,'
- - ' AI tools support me in delivering better results,'
- - ' AI optimizes content strategy through data analysis,'
+ - ' AI will possibly replace customer service jobs,'
+ - ' AIs interactive capabilities feel almost human,'
+ - ' AI enhances my capability to manage diverse projects,'
   ---
 
   # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
@@ -89,9 +89,9 @@ from sentence_transformers import SentenceTransformer
   model = SentenceTransformer("zihoo/all-MiniLM-L6-v2-AINLI")
   # Run inference
   sentences = [
-     ' AI effectively enhances my problem-solving strategies,',
-     ' AI consistently provides reliable recommendations,',
-     ' AI optimizes content strategy through data analysis,',
+     ' AIs efficiency in data mining is impressive,',
+     ' AI will possibly replace customer service jobs,',
+     ' AI enhances my capability to manage diverse projects,',
   ]
   embeddings = model.encode(sentences)
   print(embeddings.shape)
@@ -154,11 +154,11 @@ You can finetune this model on your own dataset.
   | type | string | string | int |
   | details | <ul><li>min: 8 tokens</li><li>mean: 11.91 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 11.91 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>0: ~9.00%</li><li>1: ~69.00%</li><li>2: ~22.00%</li></ul> |
   * Samples:
- | sentence_0 | sentence_1 | label |
- |:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------|
- | <code> AI systems autonomously integrate user feedback for improvements</code> | <code> AI systems autonomously integrate user feedback for improvements</code> | <code>0</code> |
- | <code> AI can exhibit empathy in certain contexts,</code> | <code> The quality of AI in customer feedback analysis is notable,</code> | <code>2</code> |
- | <code> I feel stressed about using AI in professional settings,</code> | <code> I feel tense dealing with advanced AI technologies,</code> | <code>0</code> |
+ | sentence_0 | sentence_1 | label |
+ |:----------------------------------------------------------------|:-------------------------------------------------------------------|:---------------|
+ | <code> AI-generated stories feel incredibly human,</code> | <code> Using AI tools makes me feel anxious,</code> | <code>2</code> |
+ | <code> AI-generated dialogue feels convincingly human,</code> | <code> AI could greatly impact the future job landscape,</code> | <code>1</code> |
+ | <code> AI helps me manage my schedule more efficiently,</code> | <code> AI can recognize and respond to human-like sarcasm,</code> | <code>1</code> |
   * Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
 
   ### Training Hyperparameters
@@ -166,6 +166,7 @@ You can finetune this model on your own dataset.
 
   - `per_device_train_batch_size`: 64
   - `per_device_eval_batch_size`: 64
+ - `num_train_epochs`: 4
   - `multi_dataset_batch_sampler`: round_robin
 
   #### All Hyperparameters
@@ -187,7 +188,7 @@ You can finetune this model on your own dataset.
   - `adam_beta2`: 0.999
   - `adam_epsilon`: 1e-08
   - `max_grad_norm`: 1
- - `num_train_epochs`: 3
+ - `num_train_epochs`: 4
   - `max_steps`: -1
   - `lr_scheduler_type`: linear
   - `lr_scheduler_kwargs`: {}
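The README's updated inference snippet stops at printing the embedding shape. As a minimal follow-up sketch (not part of the committed card), the same embeddings can be scored against each other; this assumes a sentence-transformers 3.x install, where `SentenceTransformer.similarity` returns the pairwise cosine-similarity matrix (on older releases, `sentence_transformers.util.cos_sim` gives the same result):

```python
from sentence_transformers import SentenceTransformer

# Model id and example sentences taken from the updated README snippet above.
model = SentenceTransformer("zihoo/all-MiniLM-L6-v2-AINLI")
sentences = [
    " AIs efficiency in data mining is impressive,",
    " AI will possibly replace customer service jobs,",
    " AI enhances my capability to manage diverse projects,",
]

embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 384) for the MiniLM-L6 backbone

# Pairwise cosine similarities between the three example sentences.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3)
```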
model.safetensors CHANGED
@@ -1,3 +1,3 @@
   version https://git-lfs.github.com/spec/v1
- oid sha256:8d11d89d38b73dc9bbee44dd082835092283a177546e02e2ff87748ef6863ac0
+ oid sha256:061e1a54c00a54ad184f6f1a8bf0a296b3b5bc09eadf099a2e9d0b298881ac67
   size 90864192
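The model.safetensors change is only a Git LFS pointer update: the `oid sha256` value changes while the size stays 90864192 bytes. A minimal sketch for checking a downloaded copy against the new pointer, assuming `huggingface_hub` is installed and using the repo id from the README snippet (`zihoo/all-MiniLM-L6-v2-AINLI`):

```python
import hashlib
import os

from huggingface_hub import hf_hub_download

# Fetch (or reuse the cached copy of) the updated weights file.
path = hf_hub_download(repo_id="zihoo/all-MiniLM-L6-v2-AINLI", filename="model.safetensors")

# Values from the new LFS pointer in this commit.
EXPECTED_SHA256 = "061e1a54c00a54ad184f6f1a8bf0a296b3b5bc09eadf099a2e9d0b298881ac67"
EXPECTED_SIZE = 90864192

# Hash the file in 1 MiB chunks to avoid loading ~90 MB into memory at once.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "unexpected file size"
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 does not match the LFS pointer"
print("model.safetensors matches the pointer recorded in commit 8edc72f")
```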