error577 committed on
Commit 3f43ca5 · verified · 1 Parent(s): 3974ae7

End of training

README.md CHANGED
@@ -47,14 +47,14 @@ flash_attention: true
  fp16: null
  fsdp: null
  fsdp_config: null
- gradient_accumulation_steps: 8
+ gradient_accumulation_steps: 4
  gradient_checkpointing: true
  group_by_length: false
  hub_model_id: error577/3f83038b-0f7c-401f-b477-320813c2d642
  hub_repo: null
  hub_strategy: end
  hub_token: null
- learning_rate: 0.0001
+ learning_rate: 0.0002
  load_in_4bit: true
  load_in_8bit: false
  local_rank: null
@@ -66,11 +66,11 @@ lora_model_dir: null
  lora_r: 32
  lora_target_linear: true
  lr_scheduler: cosine
- max_steps: 100
- micro_batch_size: 1
+ max_steps: 500
+ micro_batch_size: 2
  mlflow_experiment_name: /tmp/db79916eba4ff997_train_data.json
  model_type: AutoModelForCausalLM
- num_epochs: 1
+ num_epochs: 4
  optimizer: adamw_bnb_8bit
  output_dir: miner_id_24
  pad_to_sequence_len: true
@@ -103,7 +103,7 @@ xformers_attention: null
 
  This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.0123
+ - Loss: 1.6602
 
  ## Model description
 
@@ -122,22 +122,26 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 1
- - eval_batch_size: 1
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 2
  - seed: 42
- - gradient_accumulation_steps: 8
+ - gradient_accumulation_steps: 4
  - total_train_batch_size: 8
  - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 10
- - training_steps: 100
+ - training_steps: 500
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:------:|:----:|:---------------:|
- | 2.023 | 0.0207 | 100 | 2.0123 |
+ | 2.6068 | 0.0002 | 1 | 2.9328 |
+ | 1.4663 | 0.0259 | 125 | 1.8055 |
+ | 1.7349 | 0.0518 | 250 | 1.7140 |
+ | 1.6004 | 0.0776 | 375 | 1.6695 |
+ | 1.3134 | 0.1035 | 500 | 1.6602 |
 
 
  ### Framework versions
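Note that `total_train_batch_size: 8` is unchanged even though both batch-related knobs moved: the effective batch per optimizer step is the micro batch size times the gradient accumulation steps, and the commit halves accumulation (8 → 4) while doubling the micro batch size (1 → 2). A minimal sketch of that arithmetic (function and variable names are illustrative, not taken from the training code):

```python
# Effective (total) train batch size per optimizer step =
# micro_batch_size * gradient_accumulation_steps * number of devices.
def effective_batch_size(micro_batch_size: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    return micro_batch_size * grad_accum_steps * num_devices

print(effective_batch_size(1, 8))  # old config -> 8
print(effective_batch_size(2, 4))  # new config -> 8
```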
adapter_config.json CHANGED
@@ -20,12 +20,12 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
+   "up_proj",
+   "gate_proj",
+   "q_proj",
    "down_proj",
    "k_proj",
-   "gate_proj",
-   "up_proj",
    "o_proj",
-   "q_proj",
    "v_proj"
   ],
   "task_type": "CAUSAL_LM",
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ed7b95649d746d4bdd6dd402a67c03d4248a82c23144097e7ee6569920fb32c0
+ oid sha256:2184d3168dce96eb5894e14ae2517797d5ef3243226c3723e42f9c1bacb2aa32
  size 144824970
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:48ca79489b3bf8495c0ff1ebcef649a216c871fb625c7fff3b24e5c719fba746
+ oid sha256:91b657d35e3e230451966ea90ae95be9738f2090b7ea70e812e0c065e9c82986
  size 144748392
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dcca1f706771354531ef976c33d43361a116644dfb568c00352291242330cb4d
+ oid sha256:923e314eb82c2038d10a0de5b06d6fa3b32d9426f9ffb404175c90c23b93899d
  size 6776
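The three binary files above are Git LFS pointers, so only their `oid sha256:` and `size` fields change between commits; the actual weights live in LFS storage. A downloaded artifact can be checked against its pointer with a plain SHA-256 digest, sketched below (the local file path is an assumption):

```python
import hashlib

def lfs_sha256(path: str) -> str:
    """SHA-256 of a local file, comparable to the 'oid sha256:' field of its LFS pointer."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local copy of the adapter weights from this commit; expected digest:
# 91b657d35e3e230451966ea90ae95be9738f2090b7ea70e812e0c065e9c82986
print(lfs_sha256("adapter_model.safetensors"))
```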