duyphu committed
Commit 617e943 · verified · 1 Parent(s): ecb4c24

End of training

Files changed (2)
  1. README.md +4 -11
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -66,7 +66,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 50
+max_steps: 1
 micro_batch_size: 2
 mlflow_experiment_name: /tmp/4eba0df0a656dd91_train_data.json
 model_type: AutoModelForCausalLM
@@ -93,7 +93,7 @@ wandb_name: 4e5efe2f-5960-4644-9e88-198303f5d2db
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: 4e5efe2f-5960-4644-9e88-198303f5d2db
-warmup_steps: 10
+warmup_steps: 1
 weight_decay: 0.0
 xformers_attention: null
 
@@ -104,8 +104,6 @@ xformers_attention: null
 # 4e5efe2f-5960-4644-9e88-198303f5d2db
 
 This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.0094
 
 ## Model description
 
@@ -132,19 +130,14 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 10
-- training_steps: 50
+- lr_scheduler_warmup_steps: 2
+- training_steps: 1
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | No log | 0.0006 | 1 | 0.7707 |
-| 0.5867 | 0.0064 | 10 | 0.3562 |
-| 0.1011 | 0.0127 | 20 | 0.0508 |
-| 0.02 | 0.0191 | 30 | 0.0161 |
-| 0.0121 | 0.0254 | 40 | 0.0101 |
-| 0.0093 | 0.0318 | 50 | 0.0094 |
 
 
 ### Framework versions
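The net effect on the README is to cut the configured run length from 50 optimizer steps (with 10 warmup steps) down to a single step, and to drop the now-stale evaluation loss and intermediate results. Those two values feed the cosine learning-rate schedule declared by lr_scheduler: cosine. As a minimal sketch of what max_steps and warmup_steps control, assuming the standard transformers get_cosine_schedule_with_warmup helper, with plain torch AdamW standing in for the run's 8-bit ADAMW_BNB and an illustrative learning rate (the pre-edit 50/10 values are used so the warmup and decay are visible):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in setup: one dummy parameter and plain AdamW instead of the run's
# 8-bit ADAMW_BNB; the learning rate is illustrative, not from the config.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=2e-4, betas=(0.9, 0.999), eps=1e-8)

# warmup_steps=10, max_steps=50 are the pre-edit values; the new config
# uses 1 and 1, which leaves essentially no schedule at all.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=50
)

for step in range(1, 51):
    optimizer.step()  # in real training this follows loss.backward()
    scheduler.step()
    if step % 10 == 0:
        # LR ramps linearly over the first 10 steps, then decays on a cosine.
        print(f"step {step:2d}  lr {scheduler.get_last_lr()[0]:.2e}")
```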
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2ee03bb78c5e295f7d4f8a40dc8d9ee53d28ea200a5fa116a4994f30d76ec8c7
+oid sha256:cf78820d10097bbd51117b3f2f36e705a68ac71fbce6e4f007acbb7a83023374
 size 84047370
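Both sides of the adapter_model.bin change are Git LFS pointer files rather than the binary itself: the repo tracks only a SHA-256 oid and a byte size, and this commit swaps the oid while the size stays at 84047370 bytes. A minimal sketch of checking a downloaded copy against the new pointer, with a hypothetical local path:

```python
import hashlib
import os

# Hypothetical local path to the downloaded adapter weights.
path = "adapter_model.bin"
expected_oid = "cf78820d10097bbd51117b3f2f36e705a68ac71fbce6e4f007acbb7a83023374"
expected_size = 84047370

# Hash in 1 MiB chunks so the ~84 MB file never sits in memory at once.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("file matches the LFS pointer")
```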