error577 committed on
Commit 7dc248f · verified · 1 Parent(s): c18f923

End of training

Files changed (2):
  1. README.md +8 -13
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -65,11 +65,11 @@ lora_model_dir: null
  lora_r: 4
  lora_target_linear: true
  lr_scheduler: cosine
- max_steps: 10
+ max_steps: 20
  micro_batch_size: 1
  mlflow_experiment_name: /tmp/45fb2d361254b178_train_data.json
  model_type: AutoModelForCausalLM
- num_epochs: 4
+ num_epochs: 1
  optimizer: adamw_bnb_8bit
  output_dir: miner_id_24
  pad_to_sequence_len: true
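The hunk above only retunes the run length (max_steps, num_epochs); the LoRA shape itself (lora_r: 4, lora_target_linear: true) is unchanged, so the resulting adapter stays drop-in compatible with the base model. As a minimal sketch of how such an adapter is typically applied with peft (the adapter repo id below is a placeholder, not taken from this commit):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/CodeLlama-7b-hf"  # base model named in the README
adapter_id = "user/adapter-repo"          # placeholder: the real repo id is not shown here

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)  # load base weights
model = PeftModel.from_pretrained(model, adapter_id)   # overlay the LoRA adapter weights
model.eval()
```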
@@ -104,7 +104,7 @@ xformers_attention: null
 
  This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.3355
+ - Loss: 0.3416
 
  ## Model description
 
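Since the reported eval loss for a causal LM is the mean per-token cross-entropy, the drop from 1.3355 to 0.3416 is easiest to read as perplexity, exp(loss). A quick check in plain Python, with the values copied from the hunk above:

```python
import math

# perplexity = exp(mean cross-entropy loss) for a causal language model
print(math.exp(1.3355))  # ~3.80, eval perplexity before this commit
print(math.exp(0.3416))  # ~1.41, eval perplexity after this commit
```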
@@ -132,22 +132,17 @@ The following hyperparameters were used during training:
  - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 10
- - training_steps: 10
+ - training_steps: 20
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:------:|:----:|:---------------:|
  | 18.8184 | 0.0007 | 1 | 2.6565 |
- | 24.6349 | 0.0015 | 2 | 2.6549 |
- | 19.4332 | 0.0022 | 3 | 2.6461 |
- | 19.745 | 0.0030 | 4 | 2.6241 |
- | 20.1983 | 0.0037 | 5 | 2.5784 |
- | 19.7283 | 0.0044 | 6 | 2.4844 |
- | 16.9993 | 0.0052 | 7 | 2.3075 |
- | 19.1259 | 0.0059 | 8 | 2.0358 |
- | 20.2162 | 0.0067 | 9 | 1.7064 |
- | 11.843 | 0.0074 | 10 | 1.3355 |
+ | 20.2215 | 0.0037 | 5 | 2.5815 |
+ | 11.6753 | 0.0074 | 10 | 1.3255 |
+ | 2.0407 | 0.0111 | 15 | 0.4048 |
+ | 3.0463 | 0.0148 | 20 | 0.3416 |
 
 
  ### Framework versions
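One detail worth flagging in the hyperparameters above: with lr_scheduler_warmup_steps: 10 and training_steps raised to 20, warmup now occupies the entire first half of the run, leaving only the second half for cosine decay. A sketch of that schedule using the standard transformers helper (the optimizer and learning rate are illustrative stand-ins, not values from this commit):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Dummy parameter/optimizer just to drive the scheduler; lr is a stand-in value.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=2e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=20
)

for step in range(20):
    optimizer.step()
    scheduler.step()
    # Linear ramp up to step 10, then cosine decay toward zero by step 20.
    print(step + 1, scheduler.get_last_lr()[0])
```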
 
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a0c9202d9dc89c3b0d506c40bdab2e7d5a41568f48807e519a611cb587c2a320
+ oid sha256:413040f162c50bd4af4c0b57c3e1139fb02cccacb669ca62ba98ff3b19d1586f
  size 40138058
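The adapter_model.bin diff above changes only the Git LFS pointer: the pointer file records just the sha256 oid and byte size of the real weights, which is enough to verify a downloaded copy locally. A minimal sketch, assuming the file has already been fetched into the working directory:

```python
import hashlib
import os

path = "adapter_model.bin"  # assumed local download path
expected_oid = "413040f162c50bd4af4c0b57c3e1139fb02cccacb669ca62ba98ff3b19d1586f"
expected_size = 40138058    # byte size recorded in the pointer file

assert os.path.getsize(path) == expected_size, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```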