SystemAdmin123 committed
Commit f0c17a7 · verified · 1 parent: 9a56763

End of training

Files changed (2)
  1. README.md +139 -0
  2. generation_config.json +8 -0
README.md ADDED
@@ -0,0 +1,139 @@
---
library_name: transformers
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
datasets:
- argilla/databricks-dolly-15k-curated-en
model-index:
- name: llama-68m
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: JackFram/llama-68m
batch_size: 64
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- format: custom
  path: argilla/databricks-dolly-15k-curated-en
  type:
    field_input: original-instruction
    field_instruction: original-instruction
    field_output: original-response
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
device_map: auto
eval_sample_packing: false
eval_steps: 50
flash_attention: true
gradient_checkpointing: true
group_by_length: true
hub_model_id: SystemAdmin123/llama-68m
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 5000
micro_batch_size: 32
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: /root/.sn56/axolotl/tmp/llama-68m
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: true
save_steps: 50
save_total_limit: 2
sequence_len: 2048
special_tokens:
  pad_token: </s>
tokenizer_type: LlamaTokenizerFast
torch_dtype: bf16
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: JackFram/llama-68m-argilla/databricks-dolly-15k-curated-en
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05

```

</details><br>

# llama-68m

This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the argilla/databricks-dolly-15k-curated-en dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0103

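For a quick smoke test of the published checkpoint, here is a minimal inference sketch using the standard `transformers` API. It assumes the `SystemAdmin123/llama-68m` repository (the config's `hub_model_id`) is accessible and that `torch` and `transformers` are installed; the prompt string is illustrative, since the card does not define an inference format.

```python
# Minimal inference sketch for this checkpoint (not part of the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SystemAdmin123/llama-68m"  # hub_model_id from the config above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("What is Databricks Dolly?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
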
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned on the [argilla/databricks-dolly-15k-curated-en](https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-en) dataset, with 10% of the data held out for evaluation (`val_set_size: 0.1` in the config above).

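Given the field mapping in the config (`field_instruction` and `field_input` both read `original-instruction`), here is a sketch of how one record would be rendered into a prompt/target pair under the `format: '{instruction} {input}'` template. The record is hypothetical, and this is an interpretation of the config rather than axolotl's exact preprocessing code:

```python
# Sketch of the prompt template from the axolotl config (hypothetical record;
# an interpretation of the config, not axolotl's actual preprocessing).
record = {
    "original-instruction": "List three uses of vinegar.",        # field_instruction / field_input
    "original-response": "Cleaning, cooking, and preserving food.",  # field_output
}

# format: '{instruction} {input}' -- both placeholders resolve to the same
# field here, so the instruction text appears twice in the prompt.
instruction = record["original-instruction"]
prompt = f"{instruction} {instruction}"
target = record["original-response"]

print(prompt)
print(target)
```
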
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments (sketched below)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30 (warmup_ratio 0.05 × 600 training steps)
- training_steps: 600

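For reference, a sketch of this optimizer/scheduler pairing built directly with `bitsandbytes` and `transformers`; the `Linear` module stands in for the actual language model:

```python
# Sketch: 8-bit AdamW (adamw_bnb_8bit) plus cosine schedule with warmup,
# using the values listed above. The Linear module is a placeholder.
import torch
import bitsandbytes as bnb
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(512, 512)  # stand-in for the language model
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=2e-4,             # learning_rate
    betas=(0.9, 0.999),  # betas
    eps=1e-8,            # epsilon
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=30,     # lr_scheduler_warmup_steps
    num_training_steps=600,  # training_steps
)
```
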
### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log        | 0.0769  | 1    | 3.9168          |
| 2.5978        | 3.8462  | 50   | 2.8149          |
| 2.0808        | 7.6923  | 100  | 2.9664          |
| 1.6294        | 11.5385 | 150  | 3.2337          |
| 1.2699        | 15.3846 | 200  | 3.5217          |
| 1.0092        | 19.2308 | 250  | 3.7262          |
| 0.8392        | 23.0769 | 300  | 3.8683          |
| 0.7428        | 26.9231 | 350  | 3.9435          |
| 0.6952        | 30.7692 | 400  | 3.9860          |
| 0.6762        | 34.6154 | 450  | 3.9990          |
| 0.6739        | 38.4615 | 500  | 4.0167          |
| 0.6691        | 42.3077 | 550  | 4.0208          |
| 0.6667        | 46.1538 | 600  | 4.0103          |


### Framework versions

- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,8 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "do_sample": true,
  "eos_token_id": 2,
  "pad_token_id": 1,
  "transformers_version": "4.48.1"
}
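These are the sampling defaults `generate()` picks up when no arguments override them. A small sketch for inspecting them via the standard `transformers` API, assuming the repository id above:

```python
# Sketch: load and inspect the generation defaults shipped with the checkpoint.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("SystemAdmin123/llama-68m")
print(gen_config.do_sample)      # True: generate() samples rather than greedy-decoding
print(gen_config.eos_token_id)   # 2
print(gen_config.pad_token_id)   # 1
```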