tuanna08go committed
Commit a439800 · verified · 1 Parent(s): 867fec4

End of training

Files changed (2)
  1. README.md +11 -4
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -64,7 +64,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 1
+max_steps: 50
 micro_batch_size: 2
 mlflow_experiment_name: /tmp/5aa481aafe9887da_train_data.json
 model_type: AutoModelForCausalLM
@@ -89,7 +89,7 @@ wandb_name: ed74e10b-b01f-41f6-8404-7bad9053d62b
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: ed74e10b-b01f-41f6-8404-7bad9053d62b
-warmup_steps: 1
+warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
 
@@ -100,6 +100,8 @@ xformers_attention: null
 # ed74e10b-b01f-41f6-8404-7bad9053d62b
 
 This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.3887
 
 ## Model description
 
@@ -126,14 +128,19 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 2
-- training_steps: 1
+- lr_scheduler_warmup_steps: 10
+- training_steps: 50
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | No log | 0.0033 | 1 | 1.5814 |
+| 6.2149 | 0.0329 | 10 | 1.5433 |
+| 6.018 | 0.0658 | 20 | 1.4256 |
+| 5.5577 | 0.0987 | 30 | 1.3944 |
+| 5.4495 | 0.1316 | 40 | 1.3895 |
+| 5.0081 | 0.1645 | 50 | 1.3887 |
 
 
 ### Framework versions
 
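The substantive change in this commit is the move from a 1-step smoke test to a 50-step run with 10 warmup steps on the cosine schedule. As a rough sketch of what those three settings produce together, the snippet below uses transformers' `get_cosine_schedule_with_warmup`; the base learning rate is a placeholder, since it is not visible in these hunks, and the real run is driven by the training framework rather than a hand-rolled loop like this.

```python
# Minimal sketch of the new schedule (lr_scheduler: cosine,
# warmup_steps: 10, max_steps: 50). Illustrative only.
import torch
from transformers import get_cosine_schedule_with_warmup

params = torch.nn.Linear(8, 8).parameters()     # stand-in for the LoRA weights
optimizer = torch.optim.AdamW(params, lr=2e-4)  # lr is a placeholder value

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,    # warmup_steps from the config
    num_training_steps=50,  # max_steps from the config
)

for step in range(50):
    optimizer.step()        # no-op here; a real step follows loss.backward()
    scheduler.step()
    if step in (0, 9, 24, 49):
        print(f"step {step + 1}: lr = {scheduler.get_last_lr()[0]:.2e}")
```

The learning rate ramps linearly over the first 10 steps, then decays along a cosine curve to zero at step 50.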
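One incidental detail the new results table makes recoverable is the dataset size: step 50 corresponds to 0.1645 epochs at a total train batch size of 8, so an epoch is about 50 / 0.1645 ≈ 304 optimizer steps and the training split holds roughly 304 × 8 ≈ 2,400 examples (the step-1 row, with 1 / 0.0033 ≈ 303, agrees). A quick check of that arithmetic, assuming the Epoch column is simply steps divided by steps per epoch:

```python
# Back-of-envelope dataset size implied by the results table.
# Assumes the Epoch column is steps / steps_per_epoch and that
# total_train_batch_size = 8 already accounts for gradient accumulation.
steps, epoch_frac, effective_batch = 50, 0.1645, 8
steps_per_epoch = steps / epoch_frac             # ~304 steps per epoch
print(round(steps_per_epoch * effective_batch))  # ~2432 training examples
```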
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:823dcb5aa96f9a8584540c8a2e5726e0689d214cd113b38d9db6b24afe6fc602
+oid sha256:a345f4a3094fbbd9cfaa0a1f73de9259568fd77f77d181e25d45960274387b1b
 size 80115210
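The adapter_model.bin change is the payload of this commit: a new LoRA checkpoint with the same 80 MB size but a fresh sha256. Below is a minimal sketch of applying such an adapter to the base model with peft; the adapter repo id is a placeholder for wherever this file is hosted, which the diff itself does not state.

```python
# Sketch: load the base checkpoint and apply the updated LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "NousResearch/Yarn-Llama-2-7b-64k"
ADAPTER = "user/adapter-repo"  # placeholder: the repo hosting adapter_model.bin

base = AutoModelForCausalLM.from_pretrained(
    BASE,
    trust_remote_code=True,  # Yarn checkpoints ship custom modeling code
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# PeftModel reads adapter_config.json alongside adapter_model.bin.
model = PeftModel.from_pretrained(base, ADAPTER)
```

For inference the adapter can also be folded into the base weights with `model.merge_and_unload()`, at the cost of losing the small, swappable adapter file.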