mrferr3t committed on
Commit b7cf438 · verified
1 Parent(s): 4fae396

End of training

Files changed (3)
  1. README.md +174 -0
  2. adapter_model.bin +3 -0
  3. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,174 @@
+ ---
+ library_name: peft
+ license: mit
+ base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
+ tags:
+ - axolotl
+ - generated_from_trainer
+ model-index:
+ - name: f42699fe-8a2a-46de-81d3-a4c4af1fea12
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.1`
+ ```yaml
+ adapter: lora
+ auto_find_batch_size: true
+ base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
+ bf16: auto
+ chat_template: llama3
+ dataloader_num_workers: 12
+ dataset_prepared_path: null
+ datasets:
+ - data_files:
+   - fe297105e697bbbb_train_data.json
+   ds_type: json
+   format: custom
+   path: /workspace/input_data/fe297105e697bbbb_train_data.json
+   type:
+     field_instruction: task
+     field_output: solution
+     format: '{instruction}'
+     no_input_format: '{instruction}'
+     system_format: '{system}'
+     system_prompt: ''
+ debug: null
+ deepspeed: null
+ early_stopping_patience: 3
+ early_stopping_threshold: 0.001
+ eval_max_new_tokens: 128
+ eval_steps: 40
+ flash_attention: false
+ fp16: null
+ fsdp: null
+ fsdp_config: null
+ gradient_accumulation_steps: 2
+ gradient_checkpointing: false
+ group_by_length: false
+ hub_model_id: mrferr3t/f42699fe-8a2a-46de-81d3-a4c4af1fea12
+ hub_repo: null
+ hub_strategy: checkpoint
+ hub_token: null
+ learning_rate: 0.0003
+ load_in_4bit: false
+ load_in_8bit: false
+ local_rank: null
+ logging_steps: 100
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_fan_in_fan_out: null
+ lora_model_dir: null
+ lora_r: 8
+ lora_target_linear: true
+ lr_scheduler: cosine
+ micro_batch_size: 32
+ mlflow_experiment_name: /tmp/fe297105e697bbbb_train_data.json
+ model_type: AutoModelForCausalLM
+ num_epochs: 50
+ optimizer: adamw_bnb_8bit
+ output_dir: miner_id_24
+ pad_to_sequence_len: true
+ s2_attention: null
+ sample_packing: false
+ save_steps: 40
+ saves_per_epoch: 0
+ sequence_len: 512
+ strict: false
+ tf32: false
+ tokenizer_type: AutoTokenizer
+ train_on_inputs: false
+ trust_remote_code: true
+ val_set_size: 0.05
+ wandb_entity: null
+ wandb_mode: online
+ wandb_name: 3fa43a59-7bfe-43c9-93ae-74585476d2fa
+ wandb_project: Gradients-On-Demand
+ wandb_run: your_name
+ wandb_runid: 3fa43a59-7bfe-43c9-93ae-74585476d2fa
+ warmup_ratio: 0.05
+ weight_decay: 0.0
+ xformers_attention: null
+
+ ```
+
+ </details><br>
+
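+ For orientation, the LoRA settings in the config above (`lora_r: 8`, `lora_alpha: 16`, `lora_dropout: 0.05`, `lora_target_linear: true`) correspond roughly to the following PEFT configuration. This is a minimal sketch, not the object Axolotl builds internally; in particular, `target_modules="all-linear"` is an assumption standing in for "target all linear layers".
+
+ ```python
+ from peft import LoraConfig
+
+ # Rough PEFT equivalent of the LoRA settings above (a sketch, not
+ # Axolotl's internal construction). `target_modules="all-linear"` is
+ # an assumption for `lora_target_linear: true`.
+ lora_config = LoraConfig(
+     r=8,
+     lora_alpha=16,
+     lora_dropout=0.05,
+     bias="none",
+     task_type="CAUSAL_LM",
+     target_modules="all-linear",
+ )
+ ```
+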
+ # f42699fe-8a2a-46de-81d3-a4c4af1fea12
+
+ This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5931
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
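+
+ A minimal loading sketch, assuming `transformers` and `peft` are installed and enough memory is available for the 14B base model; the prompt and generation settings are illustrative only, since the card does not document a prompt template:
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base_id = "migtissera/Tess-v2.5-Phi-3-medium-128k-14B"
+ adapter_id = "mrferr3t/f42699fe-8a2a-46de-81d3-a4c4af1fea12"
+
+ # Load the base model, then attach this LoRA adapter on top of it.
+ tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     base_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+     trust_remote_code=True,
+ )
+ model = PeftModel.from_pretrained(model, adapter_id)
+
+ # Illustrative generation call; the prompt format is an assumption.
+ inputs = tokenizer("Write a short task description.", return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```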
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0003
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments (see the sketch after this list)
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 150
+ - num_epochs: 50
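+
+ The 8-bit optimizer listed above comes from `bitsandbytes`. A minimal sketch of an equivalent stand-alone instantiation, assuming `bitsandbytes` is installed and `model` has already been constructed:
+
+ ```python
+ import bitsandbytes as bnb
+
+ # 8-bit AdamW with the values listed above: lr=3e-4, betas=(0.9, 0.999),
+ # eps=1e-8, weight_decay=0.0. `model` is assumed to exist already.
+ optimizer = bnb.optim.AdamW8bit(
+     model.parameters(),
+     lr=3e-4,
+     betas=(0.9, 0.999),
+     eps=1e-8,
+     weight_decay=0.0,
+ )
+ ```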
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | No log | 0.0010 | 1 | 0.7430 |
+ | No log | 0.0415 | 40 | 0.6771 |
+ | No log | 0.0829 | 80 | 0.6422 |
+ | 1.3417 | 0.1244 | 120 | 0.6342 |
+ | 1.3417 | 0.1658 | 160 | 0.6263 |
+ | 1.2346 | 0.2073 | 200 | 0.6233 |
+ | 1.2346 | 0.2487 | 240 | 0.6192 |
+ | 1.2346 | 0.2902 | 280 | 0.6167 |
+ | 1.2428 | 0.3316 | 320 | 0.6131 |
+ | 1.2428 | 0.3731 | 360 | 0.6113 |
+ | 1.1918 | 0.4145 | 400 | 0.6059 |
+ | 1.1918 | 0.4560 | 440 | 0.6034 |
+ | 1.1918 | 0.4974 | 480 | 0.6010 |
+ | 1.2162 | 0.5389 | 520 | 0.6008 |
+ | 1.2162 | 0.5803 | 560 | 0.5946 |
+ | 1.1867 | 0.6218 | 600 | 0.5968 |
+ | 1.1867 | 0.6632 | 640 | 0.5912 |
+ | 1.1867 | 0.7047 | 680 | 0.5887 |
+ | 1.1763 | 0.7461 | 720 | 0.5855 |
+ | 1.1763 | 0.7876 | 760 | 0.5841 |
+ | 1.1783 | 0.8290 | 800 | 0.5831 |
+ | 1.1783 | 0.8705 | 840 | 0.5802 |
+ | 1.1783 | 0.9119 | 880 | 0.5773 |
+ | 1.1443 | 0.9534 | 920 | 0.5796 |
+ | 1.1443 | 0.9948 | 960 | 0.5779 |
+ | 1.0328 | 1.0363 | 1000 | 0.5931 |
+
+
+ ### Framework versions
+
+ - PEFT 0.13.2
+ - Transformers 4.46.0
+ - Pytorch 2.3.1+cu121
+ - Datasets 3.0.1
+ - Tokenizers 0.20.1
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67330fc1eae24f0acbb3de79370b031c16d0c393732dc257650376fe459c0e93
+ size 111526858
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e969e52342803b299b7ed319dcf94e15a8c84105f20d3e213c87987284453833
+ oid sha256:84b83615f0a59f08366868a536e7bbf1d2c28c50be985795752c1672bbf7b97e
  size 111454040