FatCat87 committed
Commit 0f1bf52 · verified · 1 Parent(s): c75de9b

Upload folder using huggingface_hub

checkpoint-196/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ library_name: peft
3
+ base_model: NousResearch/CodeLlama-7b-hf
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.11.1
checkpoint-196/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "NousResearch/CodeLlama-7b-hf",
+ "bias": "none",
+ "fan_in_fan_out": null,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "gate_proj",
+ "v_proj",
+ "o_proj",
+ "k_proj",
+ "up_proj",
+ "down_proj",
+ "q_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
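
The adapter config above is a plain PEFT LoRA adapter (r=32, lora_alpha=16, dropout 0.05 on the attention and MLP projections) over NousResearch/CodeLlama-7b-hf. A minimal sketch of how such a checkpoint could be attached to the base model for inference is shown below; the local checkpoint path, device_map, and prompt are illustrative assumptions, not part of this commit.

```python
# Sketch only: attach the LoRA adapter from this checkpoint to the base CodeLlama model.
# Assumes transformers, peft (>=0.11) and accelerate are installed; the path is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/CodeLlama-7b-hf"
adapter_dir = "checkpoint-196"  # folder with adapter_config.json + adapter_model.safetensors

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_dir)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```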
checkpoint-196/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d864f7c13ddab5d0f3b7278bb1a3d64f11cf60e64194d5ce005452c9d18ed413
+ size 319876032
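
For each binary file in this commit the repository stores only a Git LFS pointer like the one above; the oid is the SHA-256 of the actual file contents. A downloaded artifact can therefore be sanity-checked against its pointer, as in the sketch below (the local path is assumed).

```python
# Sketch only: verify a downloaded LFS artifact against the sha256 oid in its pointer file.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file so large checkpoints don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

print(sha256_of("checkpoint-196/adapter_model.safetensors"))
# expected: d864f7c13ddab5d0f3b7278bb1a3d64f11cf60e64194d5ce005452c9d18ed413
```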
checkpoint-196/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3ce267bd5a7479a2ce0e0b6bb51d5eef84024cb0d2429379c06448b29e21a39
+ size 160736084
checkpoint-196/rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d107a6144e78522b1d6f429faba280da16f2b609387ac758a998c16f053d803e
+ size 14960
checkpoint-196/rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:849b02b883c17f420e0f4f92078172b190f770f39a47971c5631de03bb26519f
+ size 14960
checkpoint-196/rng_state_2.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f656a2486ff1cf688db4ff13ce8dc019fe41d0342657d208668bb9a6cb5c0cd7
+ size 14960
checkpoint-196/rng_state_3.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4668e970ff90d1e8dea5510be594298fe9c19d655c1578f0ffce328a9f2c1463
+ size 14960
checkpoint-196/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e22ca0a50bab80d00c8b8910bffb983a348f8762b7cf025e6f8e64a05a938289
+ size 1064
checkpoint-196/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
+ {
+ "additional_special_tokens": [
+ "▁<PRE>",
+ "▁<MID>",
+ "▁<SUF>",
+ "▁<EOT>"
+ ],
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-196/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45ccb9c8b6b561889acea59191d66986d314e7cbd6a78abc6e49b139ca91c1e6
+ size 500058
checkpoint-196/tokenizer_config.json ADDED
@@ -0,0 +1,85 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32007": {
+ "content": "▁<PRE>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32008": {
+ "content": "▁<SUF>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32009": {
+ "content": "▁<MID>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32010": {
+ "content": "▁<EOT>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "▁<PRE>",
+ "▁<MID>",
+ "▁<SUF>",
+ "▁<EOT>"
+ ],
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "eot_token": "▁<EOT>",
+ "fill_token": "<FILL_ME>",
+ "legacy": null,
+ "middle_token": "▁<MID>",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "prefix_token": "▁<PRE>",
+ "sp_model_kwargs": {},
+ "suffix_first": false,
+ "suffix_token": "▁<SUF>",
+ "tokenizer_class": "CodeLlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false,
+ "use_fast": true
+ }
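
The tokenizer shipped with the checkpoint is the stock CodeLlama tokenizer, including the fill-in-the-middle special tokens (▁<PRE>, ▁<MID>, ▁<SUF>, ▁<EOT>) and the <FILL_ME> fill token. A quick sketch of loading it from the checkpoint folder and inspecting those tokens (local path assumed):

```python
# Sketch only: load the tokenizer from this checkpoint and look at its infill special tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("checkpoint-196")  # hypothetical local path
print(tokenizer.__class__.__name__)          # a CodeLlama tokenizer class
print(tokenizer.additional_special_tokens)   # ['▁<PRE>', '▁<MID>', '▁<SUF>', '▁<EOT>']

# "<FILL_ME>" marks the span to be completed in CodeLlama-style infilling prompts.
encoded = tokenizer("def add(a, b):\n    <FILL_ME>\n", return_tensors="pt")
print(encoded["input_ids"].shape)
```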
checkpoint-196/trainer_state.json ADDED
@@ -0,0 +1,1429 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 1.9644670050761421,
5
+ "eval_steps": 98,
6
+ "global_step": 196,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.01015228426395939,
13
+ "grad_norm": 0.516016960144043,
14
+ "learning_rate": 2e-05,
15
+ "loss": 4.3699,
16
+ "step": 1
17
+ },
18
+ {
19
+ "epoch": 0.01015228426395939,
20
+ "eval_loss": 4.561354637145996,
21
+ "eval_runtime": 58.1653,
22
+ "eval_samples_per_second": 10.144,
23
+ "eval_steps_per_second": 1.272,
24
+ "step": 1
25
+ },
26
+ {
27
+ "epoch": 0.02030456852791878,
28
+ "grad_norm": 0.5144633650779724,
29
+ "learning_rate": 4e-05,
30
+ "loss": 4.5708,
31
+ "step": 2
32
+ },
33
+ {
34
+ "epoch": 0.030456852791878174,
35
+ "grad_norm": 0.5246109962463379,
36
+ "learning_rate": 6e-05,
37
+ "loss": 4.5871,
38
+ "step": 3
39
+ },
40
+ {
41
+ "epoch": 0.04060913705583756,
42
+ "grad_norm": 0.5753014087677002,
43
+ "learning_rate": 8e-05,
44
+ "loss": 4.3818,
45
+ "step": 4
46
+ },
47
+ {
48
+ "epoch": 0.050761421319796954,
49
+ "grad_norm": 0.6615588665008545,
50
+ "learning_rate": 0.0001,
51
+ "loss": 4.448,
52
+ "step": 5
53
+ },
54
+ {
55
+ "epoch": 0.06091370558375635,
56
+ "grad_norm": 0.751004159450531,
57
+ "learning_rate": 0.00012,
58
+ "loss": 4.4964,
59
+ "step": 6
60
+ },
61
+ {
62
+ "epoch": 0.07106598984771574,
63
+ "grad_norm": 0.886650562286377,
64
+ "learning_rate": 0.00014,
65
+ "loss": 4.4011,
66
+ "step": 7
67
+ },
68
+ {
69
+ "epoch": 0.08121827411167512,
70
+ "grad_norm": 1.1175657510757446,
71
+ "learning_rate": 0.00016,
72
+ "loss": 3.9849,
73
+ "step": 8
74
+ },
75
+ {
76
+ "epoch": 0.09137055837563451,
77
+ "grad_norm": 0.9861809015274048,
78
+ "learning_rate": 0.00018,
79
+ "loss": 3.8218,
80
+ "step": 9
81
+ },
82
+ {
83
+ "epoch": 0.10152284263959391,
84
+ "grad_norm": 1.1107577085494995,
85
+ "learning_rate": 0.0002,
86
+ "loss": 3.6451,
87
+ "step": 10
88
+ },
89
+ {
90
+ "epoch": 0.1116751269035533,
91
+ "grad_norm": 0.9435870051383972,
92
+ "learning_rate": 0.000199985736255971,
93
+ "loss": 3.5619,
94
+ "step": 11
95
+ },
96
+ {
97
+ "epoch": 0.1218274111675127,
98
+ "grad_norm": 0.9529088139533997,
99
+ "learning_rate": 0.0001999429490929718,
100
+ "loss": 3.4561,
101
+ "step": 12
102
+ },
103
+ {
104
+ "epoch": 0.1319796954314721,
105
+ "grad_norm": 1.3805211782455444,
106
+ "learning_rate": 0.00019987165071710527,
107
+ "loss": 3.2067,
108
+ "step": 13
109
+ },
110
+ {
111
+ "epoch": 0.14213197969543148,
112
+ "grad_norm": 1.319393515586853,
113
+ "learning_rate": 0.00019977186146800707,
114
+ "loss": 3.0656,
115
+ "step": 14
116
+ },
117
+ {
118
+ "epoch": 0.15228426395939088,
119
+ "grad_norm": 1.061409592628479,
120
+ "learning_rate": 0.0001996436098130433,
121
+ "loss": 2.7711,
122
+ "step": 15
123
+ },
124
+ {
125
+ "epoch": 0.16243654822335024,
126
+ "grad_norm": 1.036845326423645,
127
+ "learning_rate": 0.00019948693233918952,
128
+ "loss": 2.6576,
129
+ "step": 16
130
+ },
131
+ {
132
+ "epoch": 0.17258883248730963,
133
+ "grad_norm": 1.0924557447433472,
134
+ "learning_rate": 0.00019930187374259337,
135
+ "loss": 2.4101,
136
+ "step": 17
137
+ },
138
+ {
139
+ "epoch": 0.18274111675126903,
140
+ "grad_norm": 1.0557212829589844,
141
+ "learning_rate": 0.00019908848681582391,
142
+ "loss": 2.2991,
143
+ "step": 18
144
+ },
145
+ {
146
+ "epoch": 0.19289340101522842,
147
+ "grad_norm": 1.1735273599624634,
148
+ "learning_rate": 0.00019884683243281116,
149
+ "loss": 2.3991,
150
+ "step": 19
151
+ },
152
+ {
153
+ "epoch": 0.20304568527918782,
154
+ "grad_norm": 0.8104203343391418,
155
+ "learning_rate": 0.00019857697953148037,
156
+ "loss": 2.107,
157
+ "step": 20
158
+ },
159
+ {
160
+ "epoch": 0.2131979695431472,
161
+ "grad_norm": 0.7275764346122742,
162
+ "learning_rate": 0.00019827900509408581,
163
+ "loss": 2.0596,
164
+ "step": 21
165
+ },
166
+ {
167
+ "epoch": 0.2233502538071066,
168
+ "grad_norm": 1.0672590732574463,
169
+ "learning_rate": 0.00019795299412524945,
170
+ "loss": 1.9543,
171
+ "step": 22
172
+ },
173
+ {
174
+ "epoch": 0.233502538071066,
175
+ "grad_norm": 0.5848283767700195,
176
+ "learning_rate": 0.00019759903962771156,
177
+ "loss": 1.8091,
178
+ "step": 23
179
+ },
180
+ {
181
+ "epoch": 0.2436548223350254,
182
+ "grad_norm": 0.9580035209655762,
183
+ "learning_rate": 0.00019721724257579907,
184
+ "loss": 1.7178,
185
+ "step": 24
186
+ },
187
+ {
188
+ "epoch": 0.25380710659898476,
189
+ "grad_norm": 0.5362741351127625,
190
+ "learning_rate": 0.00019680771188662044,
191
+ "loss": 1.6739,
192
+ "step": 25
193
+ },
194
+ {
195
+ "epoch": 0.2639593908629442,
196
+ "grad_norm": 0.5108774304389954,
197
+ "learning_rate": 0.0001963705643889941,
198
+ "loss": 1.7575,
199
+ "step": 26
200
+ },
201
+ {
202
+ "epoch": 0.27411167512690354,
203
+ "grad_norm": 0.5604164004325867,
204
+ "learning_rate": 0.00019590592479012023,
205
+ "loss": 1.6815,
206
+ "step": 27
207
+ },
208
+ {
209
+ "epoch": 0.28426395939086296,
210
+ "grad_norm": 0.7223322987556458,
211
+ "learning_rate": 0.00019541392564000488,
212
+ "loss": 1.6213,
213
+ "step": 28
214
+ },
215
+ {
216
+ "epoch": 0.29441624365482233,
217
+ "grad_norm": 0.5081471800804138,
218
+ "learning_rate": 0.00019489470729364692,
219
+ "loss": 1.5935,
220
+ "step": 29
221
+ },
222
+ {
223
+ "epoch": 0.30456852791878175,
224
+ "grad_norm": 0.5000993013381958,
225
+ "learning_rate": 0.00019434841787099803,
226
+ "loss": 1.6237,
227
+ "step": 30
228
+ },
229
+ {
230
+ "epoch": 0.3147208121827411,
231
+ "grad_norm": 0.45925211906433105,
232
+ "learning_rate": 0.00019377521321470805,
233
+ "loss": 1.6201,
234
+ "step": 31
235
+ },
236
+ {
237
+ "epoch": 0.3248730964467005,
238
+ "grad_norm": 0.38572826981544495,
239
+ "learning_rate": 0.00019317525684566685,
240
+ "loss": 1.4805,
241
+ "step": 32
242
+ },
243
+ {
244
+ "epoch": 0.3350253807106599,
245
+ "grad_norm": 0.28524091839790344,
246
+ "learning_rate": 0.00019254871991635598,
247
+ "loss": 1.5985,
248
+ "step": 33
249
+ },
250
+ {
251
+ "epoch": 0.34517766497461927,
252
+ "grad_norm": 0.3277890980243683,
253
+ "learning_rate": 0.00019189578116202307,
254
+ "loss": 1.4994,
255
+ "step": 34
256
+ },
257
+ {
258
+ "epoch": 0.3553299492385787,
259
+ "grad_norm": 0.3320370018482208,
260
+ "learning_rate": 0.00019121662684969335,
261
+ "loss": 1.5039,
262
+ "step": 35
263
+ },
264
+ {
265
+ "epoch": 0.36548223350253806,
266
+ "grad_norm": 0.2798719108104706,
267
+ "learning_rate": 0.00019051145072503215,
268
+ "loss": 1.4997,
269
+ "step": 36
270
+ },
271
+ {
272
+ "epoch": 0.3756345177664975,
273
+ "grad_norm": 1.7497050762176514,
274
+ "learning_rate": 0.00018978045395707418,
275
+ "loss": 1.5465,
276
+ "step": 37
277
+ },
278
+ {
279
+ "epoch": 0.38578680203045684,
280
+ "grad_norm": 0.27379170060157776,
281
+ "learning_rate": 0.00018902384508083517,
282
+ "loss": 1.4846,
283
+ "step": 38
284
+ },
285
+ {
286
+ "epoch": 0.39593908629441626,
287
+ "grad_norm": 0.36681699752807617,
288
+ "learning_rate": 0.00018824183993782192,
289
+ "loss": 1.4154,
290
+ "step": 39
291
+ },
292
+ {
293
+ "epoch": 0.40609137055837563,
294
+ "grad_norm": 0.45136329531669617,
295
+ "learning_rate": 0.00018743466161445823,
296
+ "loss": 1.3928,
297
+ "step": 40
298
+ },
299
+ {
300
+ "epoch": 0.41624365482233505,
301
+ "grad_norm": 0.27879664301872253,
302
+ "learning_rate": 0.00018660254037844388,
303
+ "loss": 1.5119,
304
+ "step": 41
305
+ },
306
+ {
307
+ "epoch": 0.4263959390862944,
308
+ "grad_norm": 0.29230332374572754,
309
+ "learning_rate": 0.0001857457136130651,
310
+ "loss": 1.5095,
311
+ "step": 42
312
+ },
313
+ {
314
+ "epoch": 0.4365482233502538,
315
+ "grad_norm": 0.2731008231639862,
316
+ "learning_rate": 0.00018486442574947511,
317
+ "loss": 1.4672,
318
+ "step": 43
319
+ },
320
+ {
321
+ "epoch": 0.4467005076142132,
322
+ "grad_norm": 0.23685932159423828,
323
+ "learning_rate": 0.00018395892819696389,
324
+ "loss": 1.4173,
325
+ "step": 44
326
+ },
327
+ {
328
+ "epoch": 0.45685279187817257,
329
+ "grad_norm": 0.2703058421611786,
330
+ "learning_rate": 0.00018302947927123766,
331
+ "loss": 1.4088,
332
+ "step": 45
333
+ },
334
+ {
335
+ "epoch": 0.467005076142132,
336
+ "grad_norm": 1.65743887424469,
337
+ "learning_rate": 0.00018207634412072764,
338
+ "loss": 1.4672,
339
+ "step": 46
340
+ },
341
+ {
342
+ "epoch": 0.47715736040609136,
343
+ "grad_norm": 0.21287347376346588,
344
+ "learning_rate": 0.00018109979465095013,
345
+ "loss": 1.3975,
346
+ "step": 47
347
+ },
348
+ {
349
+ "epoch": 0.4873096446700508,
350
+ "grad_norm": 0.3460160791873932,
351
+ "learning_rate": 0.00018010010944693848,
352
+ "loss": 1.4501,
353
+ "step": 48
354
+ },
355
+ {
356
+ "epoch": 0.49746192893401014,
357
+ "grad_norm": 0.4228818714618683,
358
+ "learning_rate": 0.00017907757369376985,
359
+ "loss": 1.4632,
360
+ "step": 49
361
+ },
362
+ {
363
+ "epoch": 0.5076142131979695,
364
+ "grad_norm": 0.46471402049064636,
365
+ "learning_rate": 0.0001780324790952092,
366
+ "loss": 1.3696,
367
+ "step": 50
368
+ },
369
+ {
370
+ "epoch": 0.5177664974619289,
371
+ "grad_norm": 0.35602033138275146,
372
+ "learning_rate": 0.00017696512379049325,
373
+ "loss": 1.4096,
374
+ "step": 51
375
+ },
376
+ {
377
+ "epoch": 0.5279187817258884,
378
+ "grad_norm": 0.2879682779312134,
379
+ "learning_rate": 0.0001758758122692791,
380
+ "loss": 1.337,
381
+ "step": 52
382
+ },
383
+ {
384
+ "epoch": 0.5380710659898477,
385
+ "grad_norm": 0.1947374939918518,
386
+ "learning_rate": 0.00017476485528478093,
387
+ "loss": 1.3815,
388
+ "step": 53
389
+ },
390
+ {
391
+ "epoch": 0.5482233502538071,
392
+ "grad_norm": 0.22819018363952637,
393
+ "learning_rate": 0.00017363256976511972,
394
+ "loss": 1.4021,
395
+ "step": 54
396
+ },
397
+ {
398
+ "epoch": 0.5583756345177665,
399
+ "grad_norm": 0.19164641201496124,
400
+ "learning_rate": 0.000172479278722912,
401
+ "loss": 1.3899,
402
+ "step": 55
403
+ },
404
+ {
405
+ "epoch": 0.5685279187817259,
406
+ "grad_norm": 0.5477288961410522,
407
+ "learning_rate": 0.00017130531116312203,
408
+ "loss": 1.4089,
409
+ "step": 56
410
+ },
411
+ {
412
+ "epoch": 0.5786802030456852,
413
+ "grad_norm": 0.6282036900520325,
414
+ "learning_rate": 0.0001701110019892053,
415
+ "loss": 1.3983,
416
+ "step": 57
417
+ },
418
+ {
419
+ "epoch": 0.5888324873096447,
420
+ "grad_norm": 0.5962779521942139,
421
+ "learning_rate": 0.00016889669190756868,
422
+ "loss": 1.3126,
423
+ "step": 58
424
+ },
425
+ {
426
+ "epoch": 0.5989847715736041,
427
+ "grad_norm": 0.39695534110069275,
428
+ "learning_rate": 0.00016766272733037576,
429
+ "loss": 1.3693,
430
+ "step": 59
431
+ },
432
+ {
433
+ "epoch": 0.6091370558375635,
434
+ "grad_norm": 0.2737330198287964,
435
+ "learning_rate": 0.00016640946027672392,
436
+ "loss": 1.4286,
437
+ "step": 60
438
+ },
439
+ {
440
+ "epoch": 0.6192893401015228,
441
+ "grad_norm": 0.34324145317077637,
442
+ "learning_rate": 0.00016513724827222227,
443
+ "loss": 1.3363,
444
+ "step": 61
445
+ },
446
+ {
447
+ "epoch": 0.6294416243654822,
448
+ "grad_norm": 0.4945085942745209,
449
+ "learning_rate": 0.00016384645424699835,
450
+ "loss": 1.4388,
451
+ "step": 62
452
+ },
453
+ {
454
+ "epoch": 0.6395939086294417,
455
+ "grad_norm": 0.3939533829689026,
456
+ "learning_rate": 0.00016253744643216368,
457
+ "loss": 1.325,
458
+ "step": 63
459
+ },
460
+ {
461
+ "epoch": 0.649746192893401,
462
+ "grad_norm": 0.3593675196170807,
463
+ "learning_rate": 0.0001612105982547663,
464
+ "loss": 1.353,
465
+ "step": 64
466
+ },
467
+ {
468
+ "epoch": 0.6598984771573604,
469
+ "grad_norm": 0.3457062244415283,
470
+ "learning_rate": 0.0001598662882312615,
471
+ "loss": 1.3119,
472
+ "step": 65
473
+ },
474
+ {
475
+ "epoch": 0.6700507614213198,
476
+ "grad_norm": 0.22607868909835815,
477
+ "learning_rate": 0.00015850489985953076,
478
+ "loss": 1.3281,
479
+ "step": 66
480
+ },
481
+ {
482
+ "epoch": 0.6802030456852792,
483
+ "grad_norm": 0.1937730461359024,
484
+ "learning_rate": 0.00015712682150947923,
485
+ "loss": 1.3061,
486
+ "step": 67
487
+ },
488
+ {
489
+ "epoch": 0.6903553299492385,
490
+ "grad_norm": 0.19334916770458221,
491
+ "learning_rate": 0.00015573244631224365,
492
+ "loss": 1.2995,
493
+ "step": 68
494
+ },
495
+ {
496
+ "epoch": 0.700507614213198,
497
+ "grad_norm": 0.43978920578956604,
498
+ "learning_rate": 0.0001543221720480419,
499
+ "loss": 1.3196,
500
+ "step": 69
501
+ },
502
+ {
503
+ "epoch": 0.7106598984771574,
504
+ "grad_norm": 0.20429864525794983,
505
+ "learning_rate": 0.00015289640103269625,
506
+ "loss": 1.3428,
507
+ "step": 70
508
+ },
509
+ {
510
+ "epoch": 0.7208121827411168,
511
+ "grad_norm": 0.2042793482542038,
512
+ "learning_rate": 0.0001514555400028629,
513
+ "loss": 1.2717,
514
+ "step": 71
515
+ },
516
+ {
517
+ "epoch": 0.7309644670050761,
518
+ "grad_norm": 0.2089298814535141,
519
+ "learning_rate": 0.00015000000000000001,
520
+ "loss": 1.2823,
521
+ "step": 72
522
+ },
523
+ {
524
+ "epoch": 0.7411167512690355,
525
+ "grad_norm": 0.29447218775749207,
526
+ "learning_rate": 0.00014853019625310813,
527
+ "loss": 1.2596,
528
+ "step": 73
529
+ },
530
+ {
531
+ "epoch": 0.751269035532995,
532
+ "grad_norm": 0.20766524970531464,
533
+ "learning_rate": 0.0001470465480602756,
534
+ "loss": 1.3421,
535
+ "step": 74
536
+ },
537
+ {
538
+ "epoch": 0.7614213197969543,
539
+ "grad_norm": 0.19240014255046844,
540
+ "learning_rate": 0.0001455494786690634,
541
+ "loss": 1.3871,
542
+ "step": 75
543
+ },
544
+ {
545
+ "epoch": 0.7715736040609137,
546
+ "grad_norm": 0.16677537560462952,
547
+ "learning_rate": 0.00014403941515576344,
548
+ "loss": 1.2507,
549
+ "step": 76
550
+ },
551
+ {
552
+ "epoch": 0.7817258883248731,
553
+ "grad_norm": 0.1933940052986145,
554
+ "learning_rate": 0.00014251678830356408,
555
+ "loss": 1.2792,
556
+ "step": 77
557
+ },
558
+ {
559
+ "epoch": 0.7918781725888325,
560
+ "grad_norm": 0.19050206243991852,
561
+ "learning_rate": 0.00014098203247965875,
562
+ "loss": 1.3017,
563
+ "step": 78
564
+ },
565
+ {
566
+ "epoch": 0.8020304568527918,
567
+ "grad_norm": 0.25748810172080994,
568
+ "learning_rate": 0.00013943558551133186,
569
+ "loss": 1.2879,
570
+ "step": 79
571
+ },
572
+ {
573
+ "epoch": 0.8121827411167513,
574
+ "grad_norm": 0.2314893752336502,
575
+ "learning_rate": 0.0001378778885610576,
576
+ "loss": 1.3692,
577
+ "step": 80
578
+ },
579
+ {
580
+ "epoch": 0.8223350253807107,
581
+ "grad_norm": 0.20771433413028717,
582
+ "learning_rate": 0.00013630938600064747,
583
+ "loss": 1.3268,
584
+ "step": 81
585
+ },
586
+ {
587
+ "epoch": 0.8324873096446701,
588
+ "grad_norm": 0.18968452513217926,
589
+ "learning_rate": 0.00013473052528448201,
590
+ "loss": 1.2663,
591
+ "step": 82
592
+ },
593
+ {
594
+ "epoch": 0.8426395939086294,
595
+ "grad_norm": 0.1978602409362793,
596
+ "learning_rate": 0.0001331417568218636,
597
+ "loss": 1.2968,
598
+ "step": 83
599
+ },
600
+ {
601
+ "epoch": 0.8527918781725888,
602
+ "grad_norm": 0.9941853284835815,
603
+ "learning_rate": 0.00013154353384852558,
604
+ "loss": 1.3187,
605
+ "step": 84
606
+ },
607
+ {
608
+ "epoch": 0.8629441624365483,
609
+ "grad_norm": 0.18706466257572174,
610
+ "learning_rate": 0.00012993631229733582,
611
+ "loss": 1.2808,
612
+ "step": 85
613
+ },
614
+ {
615
+ "epoch": 0.8730964467005076,
616
+ "grad_norm": 0.18098409473896027,
617
+ "learning_rate": 0.00012832055066823038,
618
+ "loss": 1.2246,
619
+ "step": 86
620
+ },
621
+ {
622
+ "epoch": 0.883248730964467,
623
+ "grad_norm": 0.22270160913467407,
624
+ "learning_rate": 0.00012669670989741517,
625
+ "loss": 1.3028,
626
+ "step": 87
627
+ },
628
+ {
629
+ "epoch": 0.8934010152284264,
630
+ "grad_norm": 0.25465860962867737,
631
+ "learning_rate": 0.00012506525322587207,
632
+ "loss": 1.347,
633
+ "step": 88
634
+ },
635
+ {
636
+ "epoch": 0.9035532994923858,
637
+ "grad_norm": 0.23076751828193665,
638
+ "learning_rate": 0.00012342664606720822,
639
+ "loss": 1.3099,
640
+ "step": 89
641
+ },
642
+ {
643
+ "epoch": 0.9137055837563451,
644
+ "grad_norm": 0.19831228256225586,
645
+ "learning_rate": 0.00012178135587488515,
646
+ "loss": 1.278,
647
+ "step": 90
648
+ },
649
+ {
650
+ "epoch": 0.9238578680203046,
651
+ "grad_norm": 0.22052858769893646,
652
+ "learning_rate": 0.00012012985200886602,
653
+ "loss": 1.2165,
654
+ "step": 91
655
+ },
656
+ {
657
+ "epoch": 0.934010152284264,
658
+ "grad_norm": 0.18730390071868896,
659
+ "learning_rate": 0.00011847260560171896,
660
+ "loss": 1.2814,
661
+ "step": 92
662
+ },
663
+ {
664
+ "epoch": 0.9441624365482234,
665
+ "grad_norm": 0.16983264684677124,
666
+ "learning_rate": 0.00011681008942421483,
667
+ "loss": 1.2235,
668
+ "step": 93
669
+ },
670
+ {
671
+ "epoch": 0.9543147208121827,
672
+ "grad_norm": 0.17806044220924377,
673
+ "learning_rate": 0.00011514277775045768,
674
+ "loss": 1.1867,
675
+ "step": 94
676
+ },
677
+ {
678
+ "epoch": 0.9644670050761421,
679
+ "grad_norm": 0.1574580818414688,
680
+ "learning_rate": 0.00011347114622258612,
681
+ "loss": 1.2718,
682
+ "step": 95
683
+ },
684
+ {
685
+ "epoch": 0.9746192893401016,
686
+ "grad_norm": 0.15895454585552216,
687
+ "learning_rate": 0.00011179567171508463,
688
+ "loss": 1.245,
689
+ "step": 96
690
+ },
691
+ {
692
+ "epoch": 0.9847715736040609,
693
+ "grad_norm": 0.22224721312522888,
694
+ "learning_rate": 0.00011011683219874323,
695
+ "loss": 1.2945,
696
+ "step": 97
697
+ },
698
+ {
699
+ "epoch": 0.9949238578680203,
700
+ "grad_norm": 0.16613103449344635,
701
+ "learning_rate": 0.00010843510660430447,
702
+ "loss": 1.3054,
703
+ "step": 98
704
+ },
705
+ {
706
+ "epoch": 0.9949238578680203,
707
+ "eval_loss": 1.249497413635254,
708
+ "eval_runtime": 58.5386,
709
+ "eval_samples_per_second": 10.079,
710
+ "eval_steps_per_second": 1.264,
711
+ "step": 98
712
+ },
713
+ {
714
+ "epoch": 1.0050761421319796,
715
+ "grad_norm": 0.18297390639781952,
716
+ "learning_rate": 0.00010675097468583652,
717
+ "loss": 1.2749,
718
+ "step": 99
719
+ },
720
+ {
721
+ "epoch": 1.015228426395939,
722
+ "grad_norm": 0.1834397166967392,
723
+ "learning_rate": 0.00010506491688387127,
724
+ "loss": 1.3218,
725
+ "step": 100
726
+ },
727
+ {
728
+ "epoch": 1.0253807106598984,
729
+ "grad_norm": 0.37363290786743164,
730
+ "learning_rate": 0.00010337741418834684,
731
+ "loss": 1.2591,
732
+ "step": 101
733
+ },
734
+ {
735
+ "epoch": 1.0101522842639594,
736
+ "grad_norm": 0.14738723635673523,
737
+ "learning_rate": 0.0001016889480013931,
738
+ "loss": 1.2353,
739
+ "step": 102
740
+ },
741
+ {
742
+ "epoch": 1.0203045685279188,
743
+ "grad_norm": 0.17808881402015686,
744
+ "learning_rate": 0.0001,
745
+ "loss": 1.2708,
746
+ "step": 103
747
+ },
748
+ {
749
+ "epoch": 1.0304568527918783,
750
+ "grad_norm": 0.1652560830116272,
751
+ "learning_rate": 9.83110519986069e-05,
752
+ "loss": 1.2261,
753
+ "step": 104
754
+ },
755
+ {
756
+ "epoch": 1.0406091370558375,
757
+ "grad_norm": 0.1601293385028839,
758
+ "learning_rate": 9.662258581165319e-05,
759
+ "loss": 1.2336,
760
+ "step": 105
761
+ },
762
+ {
763
+ "epoch": 1.0507614213197969,
764
+ "grad_norm": 0.18094658851623535,
765
+ "learning_rate": 9.493508311612874e-05,
766
+ "loss": 1.2165,
767
+ "step": 106
768
+ },
769
+ {
770
+ "epoch": 1.0609137055837563,
771
+ "grad_norm": 0.17732879519462585,
772
+ "learning_rate": 9.324902531416349e-05,
773
+ "loss": 1.2647,
774
+ "step": 107
775
+ },
776
+ {
777
+ "epoch": 1.0710659898477157,
778
+ "grad_norm": 0.16203966736793518,
779
+ "learning_rate": 9.156489339569554e-05,
780
+ "loss": 1.2343,
781
+ "step": 108
782
+ },
783
+ {
784
+ "epoch": 1.0812182741116751,
785
+ "grad_norm": 0.21242284774780273,
786
+ "learning_rate": 8.98831678012568e-05,
787
+ "loss": 1.2335,
788
+ "step": 109
789
+ },
790
+ {
791
+ "epoch": 1.0913705583756346,
792
+ "grad_norm": 0.1700202375650406,
793
+ "learning_rate": 8.820432828491542e-05,
794
+ "loss": 1.175,
795
+ "step": 110
796
+ },
797
+ {
798
+ "epoch": 1.101522842639594,
799
+ "grad_norm": 0.1947324275970459,
800
+ "learning_rate": 8.652885377741393e-05,
801
+ "loss": 1.2354,
802
+ "step": 111
803
+ },
804
+ {
805
+ "epoch": 1.1116751269035534,
806
+ "grad_norm": 0.15965348482131958,
807
+ "learning_rate": 8.485722224954237e-05,
808
+ "loss": 1.1937,
809
+ "step": 112
810
+ },
811
+ {
812
+ "epoch": 1.1218274111675126,
813
+ "grad_norm": 0.1767743080854416,
814
+ "learning_rate": 8.31899105757852e-05,
815
+ "loss": 1.2075,
816
+ "step": 113
817
+ },
818
+ {
819
+ "epoch": 1.131979695431472,
820
+ "grad_norm": 0.1793358474969864,
821
+ "learning_rate": 8.15273943982811e-05,
822
+ "loss": 1.2963,
823
+ "step": 114
824
+ },
825
+ {
826
+ "epoch": 1.1421319796954315,
827
+ "grad_norm": 0.17889666557312012,
828
+ "learning_rate": 7.987014799113397e-05,
829
+ "loss": 1.208,
830
+ "step": 115
831
+ },
832
+ {
833
+ "epoch": 1.1522842639593909,
834
+ "grad_norm": 0.16769151389598846,
835
+ "learning_rate": 7.821864412511485e-05,
836
+ "loss": 1.2279,
837
+ "step": 116
838
+ },
839
+ {
840
+ "epoch": 1.1624365482233503,
841
+ "grad_norm": 0.1788126677274704,
842
+ "learning_rate": 7.65733539327918e-05,
843
+ "loss": 1.2341,
844
+ "step": 117
845
+ },
846
+ {
847
+ "epoch": 1.1725888324873097,
848
+ "grad_norm": 0.17543092370033264,
849
+ "learning_rate": 7.493474677412794e-05,
850
+ "loss": 1.3065,
851
+ "step": 118
852
+ },
853
+ {
854
+ "epoch": 1.1827411167512691,
855
+ "grad_norm": 0.18606220185756683,
856
+ "learning_rate": 7.330329010258483e-05,
857
+ "loss": 1.2233,
858
+ "step": 119
859
+ },
860
+ {
861
+ "epoch": 1.1928934010152283,
862
+ "grad_norm": 0.23003295063972473,
863
+ "learning_rate": 7.16794493317696e-05,
864
+ "loss": 1.2107,
865
+ "step": 120
866
+ },
867
+ {
868
+ "epoch": 1.2030456852791878,
869
+ "grad_norm": 0.15619252622127533,
870
+ "learning_rate": 7.006368770266421e-05,
871
+ "loss": 1.1885,
872
+ "step": 121
873
+ },
874
+ {
875
+ "epoch": 1.2131979695431472,
876
+ "grad_norm": 0.22341646254062653,
877
+ "learning_rate": 6.845646615147445e-05,
878
+ "loss": 1.2421,
879
+ "step": 122
880
+ },
881
+ {
882
+ "epoch": 1.2233502538071066,
883
+ "grad_norm": 0.1528923660516739,
884
+ "learning_rate": 6.685824317813643e-05,
885
+ "loss": 1.207,
886
+ "step": 123
887
+ },
888
+ {
889
+ "epoch": 1.233502538071066,
890
+ "grad_norm": 0.15776072442531586,
891
+ "learning_rate": 6.526947471551798e-05,
892
+ "loss": 1.278,
893
+ "step": 124
894
+ },
895
+ {
896
+ "epoch": 1.2436548223350254,
897
+ "grad_norm": 0.1788446009159088,
898
+ "learning_rate": 6.369061399935255e-05,
899
+ "loss": 1.2107,
900
+ "step": 125
901
+ },
902
+ {
903
+ "epoch": 1.2538071065989849,
904
+ "grad_norm": 0.17271803319454193,
905
+ "learning_rate": 6.21221114389424e-05,
906
+ "loss": 1.2333,
907
+ "step": 126
908
+ },
909
+ {
910
+ "epoch": 1.263959390862944,
911
+ "grad_norm": 0.15987545251846313,
912
+ "learning_rate": 6.0564414488668165e-05,
913
+ "loss": 1.238,
914
+ "step": 127
915
+ },
916
+ {
917
+ "epoch": 1.2741116751269035,
918
+ "grad_norm": 0.16485555469989777,
919
+ "learning_rate": 5.901796752034128e-05,
920
+ "loss": 1.2471,
921
+ "step": 128
922
+ },
923
+ {
924
+ "epoch": 1.284263959390863,
925
+ "grad_norm": 0.18228358030319214,
926
+ "learning_rate": 5.748321169643596e-05,
927
+ "loss": 1.1761,
928
+ "step": 129
929
+ },
930
+ {
931
+ "epoch": 1.2944162436548223,
932
+ "grad_norm": 0.1641974151134491,
933
+ "learning_rate": 5.596058484423656e-05,
934
+ "loss": 1.2469,
935
+ "step": 130
936
+ },
937
+ {
938
+ "epoch": 1.3045685279187818,
939
+ "grad_norm": 0.20411786437034607,
940
+ "learning_rate": 5.44505213309366e-05,
941
+ "loss": 1.212,
942
+ "step": 131
943
+ },
944
+ {
945
+ "epoch": 1.3147208121827412,
946
+ "grad_norm": 0.16920053958892822,
947
+ "learning_rate": 5.2953451939724454e-05,
948
+ "loss": 1.254,
949
+ "step": 132
950
+ },
951
+ {
952
+ "epoch": 1.3248730964467006,
953
+ "grad_norm": 0.19527798891067505,
954
+ "learning_rate": 5.146980374689192e-05,
955
+ "loss": 1.2187,
956
+ "step": 133
957
+ },
958
+ {
959
+ "epoch": 1.3350253807106598,
960
+ "grad_norm": 0.19046878814697266,
961
+ "learning_rate": 5.000000000000002e-05,
962
+ "loss": 1.2035,
963
+ "step": 134
964
+ },
965
+ {
966
+ "epoch": 1.3451776649746192,
967
+ "grad_norm": 0.1827058643102646,
968
+ "learning_rate": 4.854445999713715e-05,
969
+ "loss": 1.1891,
970
+ "step": 135
971
+ },
972
+ {
973
+ "epoch": 1.3553299492385786,
974
+ "grad_norm": 0.16475000977516174,
975
+ "learning_rate": 4.710359896730379e-05,
976
+ "loss": 1.2263,
977
+ "step": 136
978
+ },
979
+ {
980
+ "epoch": 1.365482233502538,
981
+ "grad_norm": 0.15977239608764648,
982
+ "learning_rate": 4.567782795195816e-05,
983
+ "loss": 1.2006,
984
+ "step": 137
985
+ },
986
+ {
987
+ "epoch": 1.3756345177664975,
988
+ "grad_norm": 0.16366241872310638,
989
+ "learning_rate": 4.426755368775637e-05,
990
+ "loss": 1.1572,
991
+ "step": 138
992
+ },
993
+ {
994
+ "epoch": 1.385786802030457,
995
+ "grad_norm": 0.16748002171516418,
996
+ "learning_rate": 4.287317849052075e-05,
997
+ "loss": 1.1932,
998
+ "step": 139
999
+ },
1000
+ {
1001
+ "epoch": 1.3959390862944163,
1002
+ "grad_norm": 0.17944949865341187,
1003
+ "learning_rate": 4.149510014046922e-05,
1004
+ "loss": 1.1635,
1005
+ "step": 140
1006
+ },
1007
+ {
1008
+ "epoch": 1.4060913705583755,
1009
+ "grad_norm": 0.15999887883663177,
1010
+ "learning_rate": 4.013371176873849e-05,
1011
+ "loss": 1.1987,
1012
+ "step": 141
1013
+ },
1014
+ {
1015
+ "epoch": 1.4162436548223352,
1016
+ "grad_norm": 0.17952662706375122,
1017
+ "learning_rate": 3.878940174523371e-05,
1018
+ "loss": 1.2722,
1019
+ "step": 142
1020
+ },
1021
+ {
1022
+ "epoch": 1.4263959390862944,
1023
+ "grad_norm": 0.16714362800121307,
1024
+ "learning_rate": 3.746255356783632e-05,
1025
+ "loss": 1.2027,
1026
+ "step": 143
1027
+ },
1028
+ {
1029
+ "epoch": 1.4365482233502538,
1030
+ "grad_norm": 0.21137690544128418,
1031
+ "learning_rate": 3.615354575300166e-05,
1032
+ "loss": 1.2099,
1033
+ "step": 144
1034
+ },
1035
+ {
1036
+ "epoch": 1.4467005076142132,
1037
+ "grad_norm": 0.16340382397174835,
1038
+ "learning_rate": 3.4862751727777797e-05,
1039
+ "loss": 1.2476,
1040
+ "step": 145
1041
+ },
1042
+ {
1043
+ "epoch": 1.4568527918781726,
1044
+ "grad_norm": 0.33795541524887085,
1045
+ "learning_rate": 3.3590539723276083e-05,
1046
+ "loss": 1.1955,
1047
+ "step": 146
1048
+ },
1049
+ {
1050
+ "epoch": 1.467005076142132,
1051
+ "grad_norm": 0.1949048787355423,
1052
+ "learning_rate": 3.233727266962425e-05,
1053
+ "loss": 1.2588,
1054
+ "step": 147
1055
+ },
1056
+ {
1057
+ "epoch": 1.4771573604060912,
1058
+ "grad_norm": 0.15895332396030426,
1059
+ "learning_rate": 3.110330809243134e-05,
1060
+ "loss": 1.1895,
1061
+ "step": 148
1062
+ },
1063
+ {
1064
+ "epoch": 1.487309644670051,
1065
+ "grad_norm": 0.17805150151252747,
1066
+ "learning_rate": 2.9888998010794743e-05,
1067
+ "loss": 1.2412,
1068
+ "step": 149
1069
+ },
1070
+ {
1071
+ "epoch": 1.49746192893401,
1072
+ "grad_norm": 0.16068041324615479,
1073
+ "learning_rate": 2.869468883687798e-05,
1074
+ "loss": 1.1935,
1075
+ "step": 150
1076
+ },
1077
+ {
1078
+ "epoch": 1.5076142131979695,
1079
+ "grad_norm": 0.16954682767391205,
1080
+ "learning_rate": 2.7520721277088024e-05,
1081
+ "loss": 1.2139,
1082
+ "step": 151
1083
+ },
1084
+ {
1085
+ "epoch": 1.517766497461929,
1086
+ "grad_norm": 0.17811493575572968,
1087
+ "learning_rate": 2.6367430234880284e-05,
1088
+ "loss": 1.2791,
1089
+ "step": 152
1090
+ },
1091
+ {
1092
+ "epoch": 1.5279187817258884,
1093
+ "grad_norm": 0.1642419546842575,
1094
+ "learning_rate": 2.523514471521913e-05,
1095
+ "loss": 1.2178,
1096
+ "step": 153
1097
+ },
1098
+ {
1099
+ "epoch": 1.5380710659898478,
1100
+ "grad_norm": 0.16778188943862915,
1101
+ "learning_rate": 2.4124187730720917e-05,
1102
+ "loss": 1.2735,
1103
+ "step": 154
1104
+ },
1105
+ {
1106
+ "epoch": 1.548223350253807,
1107
+ "grad_norm": 0.17863012850284576,
1108
+ "learning_rate": 2.3034876209506772e-05,
1109
+ "loss": 1.1632,
1110
+ "step": 155
1111
+ },
1112
+ {
1113
+ "epoch": 1.5583756345177666,
1114
+ "grad_norm": 0.15400968492031097,
1115
+ "learning_rate": 2.1967520904790827e-05,
1116
+ "loss": 1.2465,
1117
+ "step": 156
1118
+ },
1119
+ {
1120
+ "epoch": 1.5685279187817258,
1121
+ "grad_norm": 0.1574324369430542,
1122
+ "learning_rate": 2.092242630623016e-05,
1123
+ "loss": 1.2346,
1124
+ "step": 157
1125
+ },
1126
+ {
1127
+ "epoch": 1.5786802030456852,
1128
+ "grad_norm": 0.15566711127758026,
1129
+ "learning_rate": 1.9899890553061562e-05,
1130
+ "loss": 1.188,
1131
+ "step": 158
1132
+ },
1133
+ {
1134
+ "epoch": 1.5888324873096447,
1135
+ "grad_norm": 0.1699032336473465,
1136
+ "learning_rate": 1.8900205349049904e-05,
1137
+ "loss": 1.2062,
1138
+ "step": 159
1139
+ },
1140
+ {
1141
+ "epoch": 1.598984771573604,
1142
+ "grad_norm": 0.20871774852275848,
1143
+ "learning_rate": 1.7923655879272393e-05,
1144
+ "loss": 1.2349,
1145
+ "step": 160
1146
+ },
1147
+ {
1148
+ "epoch": 1.6091370558375635,
1149
+ "grad_norm": 0.19627781212329865,
1150
+ "learning_rate": 1.6970520728762375e-05,
1151
+ "loss": 1.193,
1152
+ "step": 161
1153
+ },
1154
+ {
1155
+ "epoch": 1.6192893401015227,
1156
+ "grad_norm": 0.1803133487701416,
1157
+ "learning_rate": 1.60410718030361e-05,
1158
+ "loss": 1.28,
1159
+ "step": 162
1160
+ },
1161
+ {
1162
+ "epoch": 1.6294416243654823,
1163
+ "grad_norm": 0.17840127646923065,
1164
+ "learning_rate": 1.5135574250524897e-05,
1165
+ "loss": 1.285,
1166
+ "step": 163
1167
+ },
1168
+ {
1169
+ "epoch": 1.6395939086294415,
1170
+ "grad_norm": 0.1523265242576599,
1171
+ "learning_rate": 1.425428638693489e-05,
1172
+ "loss": 1.2491,
1173
+ "step": 164
1174
+ },
1175
+ {
1176
+ "epoch": 1.649746192893401,
1177
+ "grad_norm": 0.17296157777309418,
1178
+ "learning_rate": 1.339745962155613e-05,
1179
+ "loss": 1.231,
1180
+ "step": 165
1181
+ },
1182
+ {
1183
+ "epoch": 1.6598984771573604,
1184
+ "grad_norm": 0.17164817452430725,
1185
+ "learning_rate": 1.2565338385541792e-05,
1186
+ "loss": 1.2217,
1187
+ "step": 166
1188
+ },
1189
+ {
1190
+ "epoch": 1.6700507614213198,
1191
+ "grad_norm": 0.17271456122398376,
1192
+ "learning_rate": 1.1758160062178093e-05,
1193
+ "loss": 1.285,
1194
+ "step": 167
1195
+ },
1196
+ {
1197
+ "epoch": 1.6802030456852792,
1198
+ "grad_norm": 0.16020996868610382,
1199
+ "learning_rate": 1.097615491916485e-05,
1200
+ "loss": 1.1259,
1201
+ "step": 168
1202
+ },
1203
+ {
1204
+ "epoch": 1.6903553299492384,
1205
+ "grad_norm": 0.17722178995609283,
1206
+ "learning_rate": 1.0219546042925843e-05,
1207
+ "loss": 1.2601,
1208
+ "step": 169
1209
+ },
1210
+ {
1211
+ "epoch": 1.700507614213198,
1212
+ "grad_norm": 0.1641930788755417,
1213
+ "learning_rate": 9.488549274967872e-06,
1214
+ "loss": 1.1552,
1215
+ "step": 170
1216
+ },
1217
+ {
1218
+ "epoch": 1.7106598984771573,
1219
+ "grad_norm": 0.17400699853897095,
1220
+ "learning_rate": 8.783373150306661e-06,
1221
+ "loss": 1.2226,
1222
+ "step": 171
1223
+ },
1224
+ {
1225
+ "epoch": 1.720812182741117,
1226
+ "grad_norm": 0.16899757087230682,
1227
+ "learning_rate": 8.10421883797694e-06,
1228
+ "loss": 1.3269,
1229
+ "step": 172
1230
+ },
1231
+ {
1232
+ "epoch": 1.7309644670050761,
1233
+ "grad_norm": 0.20650531351566315,
1234
+ "learning_rate": 7.4512800836440525e-06,
1235
+ "loss": 1.1627,
1236
+ "step": 173
1237
+ },
1238
+ {
1239
+ "epoch": 1.7411167512690355,
1240
+ "grad_norm": 0.15405453741550446,
1241
+ "learning_rate": 6.824743154333157e-06,
1242
+ "loss": 1.1767,
1243
+ "step": 174
1244
+ },
1245
+ {
1246
+ "epoch": 1.751269035532995,
1247
+ "grad_norm": 0.16714583337306976,
1248
+ "learning_rate": 6.22478678529197e-06,
1249
+ "loss": 1.2332,
1250
+ "step": 175
1251
+ },
1252
+ {
1253
+ "epoch": 1.7614213197969542,
1254
+ "grad_norm": 0.16613709926605225,
1255
+ "learning_rate": 5.651582129001986e-06,
1256
+ "loss": 1.2306,
1257
+ "step": 176
1258
+ },
1259
+ {
1260
+ "epoch": 1.7715736040609138,
1261
+ "grad_norm": 0.22227272391319275,
1262
+ "learning_rate": 5.105292706353093e-06,
1263
+ "loss": 1.1969,
1264
+ "step": 177
1265
+ },
1266
+ {
1267
+ "epoch": 1.781725888324873,
1268
+ "grad_norm": 0.18371812999248505,
1269
+ "learning_rate": 4.586074359995119e-06,
1270
+ "loss": 1.2643,
1271
+ "step": 178
1272
+ },
1273
+ {
1274
+ "epoch": 1.7918781725888326,
1275
+ "grad_norm": 0.1571132242679596,
1276
+ "learning_rate": 4.094075209879788e-06,
1277
+ "loss": 1.208,
1278
+ "step": 179
1279
+ },
1280
+ {
1281
+ "epoch": 1.8020304568527918,
1282
+ "grad_norm": 0.16985946893692017,
1283
+ "learning_rate": 3.6294356110059157e-06,
1284
+ "loss": 1.2069,
1285
+ "step": 180
1286
+ },
1287
+ {
1288
+ "epoch": 1.8121827411167513,
1289
+ "grad_norm": 0.16651304066181183,
1290
+ "learning_rate": 3.1922881133795825e-06,
1291
+ "loss": 1.2017,
1292
+ "step": 181
1293
+ },
1294
+ {
1295
+ "epoch": 1.8223350253807107,
1296
+ "grad_norm": 0.16982710361480713,
1297
+ "learning_rate": 2.7827574242009437e-06,
1298
+ "loss": 1.1706,
1299
+ "step": 182
1300
+ },
1301
+ {
1302
+ "epoch": 1.83248730964467,
1303
+ "grad_norm": 0.16492588818073273,
1304
+ "learning_rate": 2.4009603722884742e-06,
1305
+ "loss": 1.2733,
1306
+ "step": 183
1307
+ },
1308
+ {
1309
+ "epoch": 1.8426395939086295,
1310
+ "grad_norm": 0.165365532040596,
1311
+ "learning_rate": 2.0470058747505516e-06,
1312
+ "loss": 1.1843,
1313
+ "step": 184
1314
+ },
1315
+ {
1316
+ "epoch": 1.8527918781725887,
1317
+ "grad_norm": 0.15770269930362701,
1318
+ "learning_rate": 1.7209949059142083e-06,
1319
+ "loss": 1.2168,
1320
+ "step": 185
1321
+ },
1322
+ {
1323
+ "epoch": 1.8629441624365484,
1324
+ "grad_norm": 0.16646115481853485,
1325
+ "learning_rate": 1.4230204685196203e-06,
1326
+ "loss": 1.1774,
1327
+ "step": 186
1328
+ },
1329
+ {
1330
+ "epoch": 1.8730964467005076,
1331
+ "grad_norm": 0.17410410940647125,
1332
+ "learning_rate": 1.1531675671888619e-06,
1333
+ "loss": 1.2268,
1334
+ "step": 187
1335
+ },
1336
+ {
1337
+ "epoch": 1.883248730964467,
1338
+ "grad_norm": 0.17677107453346252,
1339
+ "learning_rate": 9.11513184176116e-07,
1340
+ "loss": 1.1485,
1341
+ "step": 188
1342
+ },
1343
+ {
1344
+ "epoch": 1.8934010152284264,
1345
+ "grad_norm": 0.15033933520317078,
1346
+ "learning_rate": 6.981262574066394e-07,
1347
+ "loss": 1.2125,
1348
+ "step": 189
1349
+ },
1350
+ {
1351
+ "epoch": 1.9035532994923858,
1352
+ "grad_norm": 0.1645897626876831,
1353
+ "learning_rate": 5.130676608104845e-07,
1354
+ "loss": 1.2151,
1355
+ "step": 190
1356
+ },
1357
+ {
1358
+ "epoch": 1.9137055837563453,
1359
+ "grad_norm": 0.17260093986988068,
1360
+ "learning_rate": 3.56390186956701e-07,
1361
+ "loss": 1.2509,
1362
+ "step": 191
1363
+ },
1364
+ {
1365
+ "epoch": 1.9238578680203045,
1366
+ "grad_norm": 0.16453680396080017,
1367
+ "learning_rate": 2.2813853199292746e-07,
1368
+ "loss": 1.264,
1369
+ "step": 192
1370
+ },
1371
+ {
1372
+ "epoch": 1.934010152284264,
1373
+ "grad_norm": 0.1580802947282791,
1374
+ "learning_rate": 1.2834928289472416e-07,
1375
+ "loss": 1.193,
1376
+ "step": 193
1377
+ },
1378
+ {
1379
+ "epoch": 1.9441624365482233,
1380
+ "grad_norm": 0.18191225826740265,
1381
+ "learning_rate": 5.705090702819993e-08,
1382
+ "loss": 1.2404,
1383
+ "step": 194
1384
+ },
1385
+ {
1386
+ "epoch": 1.9543147208121827,
1387
+ "grad_norm": 0.19886773824691772,
1388
+ "learning_rate": 1.426374402901942e-08,
1389
+ "loss": 1.1602,
1390
+ "step": 195
1391
+ },
1392
+ {
1393
+ "epoch": 1.9644670050761421,
1394
+ "grad_norm": 0.19592063128948212,
1395
+ "learning_rate": 0.0,
1396
+ "loss": 1.1879,
1397
+ "step": 196
1398
+ },
1399
+ {
1400
+ "epoch": 1.9644670050761421,
1401
+ "eval_loss": 1.2216618061065674,
1402
+ "eval_runtime": 58.8162,
1403
+ "eval_samples_per_second": 10.031,
1404
+ "eval_steps_per_second": 1.258,
1405
+ "step": 196
1406
+ }
1407
+ ],
1408
+ "logging_steps": 1,
1409
+ "max_steps": 196,
1410
+ "num_input_tokens_seen": 0,
1411
+ "num_train_epochs": 2,
1412
+ "save_steps": 98,
1413
+ "stateful_callbacks": {
1414
+ "TrainerControl": {
1415
+ "args": {
1416
+ "should_epoch_stop": false,
1417
+ "should_evaluate": false,
1418
+ "should_log": false,
1419
+ "should_save": true,
1420
+ "should_training_stop": true
1421
+ },
1422
+ "attributes": {}
1423
+ }
1424
+ },
1425
+ "total_flos": 1.0307946378260644e+18,
1426
+ "train_batch_size": 2,
1427
+ "trial_name": null,
1428
+ "trial_params": null
1429
+ }
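
trainer_state.json above records the complete log_history for the run: a training-loss entry every step (logging_steps = 1) plus periodic eval_loss entries (eval_steps = 98). A small sketch of pulling the loss curve back out of the file, assuming only the standard library and a local copy of the checkpoint:

```python
# Sketch only: extract train/eval loss from trainer_state.json for quick inspection.
import json

with open("checkpoint-196/trainer_state.json") as f:  # hypothetical local path
    state = json.load(f)

train = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
evals = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]

print(f"final training loss {train[-1][1]:.3f} at step {train[-1][0]}")
for step, loss in evals:
    print(f"eval_loss {loss:.3f} at step {step}")
```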
checkpoint-196/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebabd7fe1a216aadf93e2b9106041cfaee52bf0b7322ca74f94d144ee0af1908
+ size 6200
checkpoint-98/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ library_name: peft
3
+ base_model: NousResearch/CodeLlama-7b-hf
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
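The calculator linked above boils down to energy used (GPU hours × power draw × data-centre overhead) times the grid's carbon intensity. A back-of-the-envelope sketch follows, with every number a placeholder since the fields above are unfilled (only the four-process hint comes from the rng_state_0..3 files in this upload):

```python
# Rough CO2 estimate in the spirit of the ML Impact calculator; all inputs are
# placeholder assumptions, not values recorded in this upload.
gpu_hours = 4 * 6.0          # e.g. 4 GPUs for 6 hours (rng_state_0..3 suggest 4 processes)
gpu_power_kw = 0.3           # ~300 W per GPU under load
pue = 1.2                    # data-centre overhead factor
grid_gco2_per_kwh = 400      # regional grid carbon intensity
kwh = gpu_hours * gpu_power_kw * pue
print(f"{kwh:.1f} kWh ~ {kwh * grid_gco2_per_kwh / 1000:.2f} kg CO2eq")
```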
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.11.1
checkpoint-98/adapter_config.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "NousResearch/CodeLlama-7b-hf",
5
+ "bias": "none",
6
+ "fan_in_fan_out": null,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layer_replication": null,
10
+ "layers_pattern": null,
11
+ "layers_to_transform": null,
12
+ "loftq_config": {},
13
+ "lora_alpha": 16,
14
+ "lora_dropout": 0.05,
15
+ "megatron_config": null,
16
+ "megatron_core": "megatron.core",
17
+ "modules_to_save": null,
18
+ "peft_type": "LORA",
19
+ "r": 32,
20
+ "rank_pattern": {},
21
+ "revision": null,
22
+ "target_modules": [
23
+ "gate_proj",
24
+ "v_proj",
25
+ "o_proj",
26
+ "k_proj",
27
+ "up_proj",
28
+ "down_proj",
29
+ "q_proj"
30
+ ],
31
+ "task_type": "CAUSAL_LM",
32
+ "use_dora": false,
33
+ "use_rslora": false
34
+ }
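This `adapter_config.json` describes a rank-32 LoRA adapter (`lora_alpha` 16, dropout 0.05, no bias terms) applied to every attention and MLP projection of CodeLlama-7B for causal language modelling. Below is a sketch of recreating the same setup with `peft`, with the values copied from the file above; only the base-model loading call is an assumption:

```python
# Sketch: the LoraConfig equivalent of the adapter_config.json shown above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
base = AutoModelForCausalLM.from_pretrained("NousResearch/CodeLlama-7b-hf")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the injected low-rank matrices are trainable
```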
checkpoint-98/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7623e73aef3de3718899aa3aa183d3f484023dc9aba3a976dc48b74150154f1f
3
+ size 319876032
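Like the other large binaries in this commit, `adapter_model.safetensors` is stored as a Git LFS pointer: the three lines above record only the spec version, the SHA-256 of the real file, and its size in bytes (319,876,032, roughly 305 MB). A small sketch of verifying a downloaded object against its pointer; the local path assumes the repo has been fetched with `git lfs pull` or the Hub client:

```python
# Verify a downloaded LFS object against the oid/size recorded in its pointer file.
import hashlib
import os

path = "checkpoint-98/adapter_model.safetensors"  # assumes the real file has been pulled

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print(h.hexdigest())          # should equal the pointer's oid sha256
print(os.path.getsize(path))  # should equal the pointer's size field
```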
checkpoint-98/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f41e42e7d1fd08636825f491201a9c3b6ad7064668c2ba27fca2af9a41b5e215
3
+ size 160736084
checkpoint-98/rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31480aca1cfcb18a53878f9e571c2f7e052a77342f7129586689d6757dc10495
3
+ size 14960
checkpoint-98/rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:78340106548d9645ba07ebe8b0c7d7b24803ee6b95373c25a757b5a3e8f2b593
3
+ size 14960
checkpoint-98/rng_state_2.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fe05cbbb9360e58c7bc54a5c9641cc225f2b526ad6ead9ce2168689bbbae65b7
3
+ size 14960
checkpoint-98/rng_state_3.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e1746434fcc316cc4a0038fc698d124320b6b3f0dc2aced172b66fb465b4e103
3
+ size 14960
checkpoint-98/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5e31465eabc96d2c0b0dc68386782c8ea3a5771edcba13d0d620c4297cd31957
3
+ size 1064
checkpoint-98/special_tokens_map.json ADDED
@@ -0,0 +1,36 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "▁<PRE>",
4
+ "▁<MID>",
5
+ "▁<SUF>",
6
+ "▁<EOT>"
7
+ ],
8
+ "bos_token": {
9
+ "content": "<s>",
10
+ "lstrip": false,
11
+ "normalized": false,
12
+ "rstrip": false,
13
+ "single_word": false
14
+ },
15
+ "eos_token": {
16
+ "content": "</s>",
17
+ "lstrip": false,
18
+ "normalized": false,
19
+ "rstrip": false,
20
+ "single_word": false
21
+ },
22
+ "pad_token": {
23
+ "content": "</s>",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false
28
+ },
29
+ "unk_token": {
30
+ "content": "<unk>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false
35
+ }
36
+ }
checkpoint-98/tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:45ccb9c8b6b561889acea59191d66986d314e7cbd6a78abc6e49b139ca91c1e6
3
+ size 500058
checkpoint-98/tokenizer_config.json ADDED
@@ -0,0 +1,85 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "32007": {
30
+ "content": "▁<PRE>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "32008": {
38
+ "content": "▁<SUF>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "32009": {
46
+ "content": "▁<MID>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "32010": {
54
+ "content": "▁<EOT>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ }
61
+ },
62
+ "additional_special_tokens": [
63
+ "▁<PRE>",
64
+ "▁<MID>",
65
+ "▁<SUF>",
66
+ "▁<EOT>"
67
+ ],
68
+ "bos_token": "<s>",
69
+ "clean_up_tokenization_spaces": false,
70
+ "eos_token": "</s>",
71
+ "eot_token": "▁<EOT>",
72
+ "fill_token": "<FILL_ME>",
73
+ "legacy": null,
74
+ "middle_token": "▁<MID>",
75
+ "model_max_length": 1000000000000000019884624838656,
76
+ "pad_token": "</s>",
77
+ "prefix_token": "▁<PRE>",
78
+ "sp_model_kwargs": {},
79
+ "suffix_first": false,
80
+ "suffix_token": "▁<SUF>",
81
+ "tokenizer_class": "CodeLlamaTokenizer",
82
+ "unk_token": "<unk>",
83
+ "use_default_system_prompt": false,
84
+ "use_fast": true
85
+ }
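The tokenizer config registers CodeLlama's infilling interface: `<FILL_ME>` as the fill token plus the `▁<PRE>`, `▁<SUF>`, `▁<MID>` and `▁<EOT>` specials, with `</s>` reused as the pad token. A minimal sketch of that fill-in-the-middle behaviour, assuming the stock `CodeLlamaTokenizer` in `transformers` rather than anything specific to this checkpoint:

```python
# Sketch: the fill-in-the-middle prompt format implied by the config above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NousResearch/CodeLlama-7b-hf")
prompt = "def remove_non_ascii(s: str) -> str:\n    <FILL_ME>\n    return result\n"
ids = tok(prompt, return_tensors="pt").input_ids
print(tok.convert_ids_to_tokens(ids[0]))  # tokens laid out as ▁<PRE> ...prefix... ▁<SUF> ...suffix... ▁<MID>
```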
checkpoint-98/trainer_state.json ADDED
@@ -0,0 +1,735 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9949238578680203,
5
+ "eval_steps": 98,
6
+ "global_step": 98,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.01015228426395939,
13
+ "grad_norm": 0.516016960144043,
14
+ "learning_rate": 2e-05,
15
+ "loss": 4.3699,
16
+ "step": 1
17
+ },
18
+ {
19
+ "epoch": 0.01015228426395939,
20
+ "eval_loss": 4.561354637145996,
21
+ "eval_runtime": 58.1653,
22
+ "eval_samples_per_second": 10.144,
23
+ "eval_steps_per_second": 1.272,
24
+ "step": 1
25
+ },
26
+ {
27
+ "epoch": 0.02030456852791878,
28
+ "grad_norm": 0.5144633650779724,
29
+ "learning_rate": 4e-05,
30
+ "loss": 4.5708,
31
+ "step": 2
32
+ },
33
+ {
34
+ "epoch": 0.030456852791878174,
35
+ "grad_norm": 0.5246109962463379,
36
+ "learning_rate": 6e-05,
37
+ "loss": 4.5871,
38
+ "step": 3
39
+ },
40
+ {
41
+ "epoch": 0.04060913705583756,
42
+ "grad_norm": 0.5753014087677002,
43
+ "learning_rate": 8e-05,
44
+ "loss": 4.3818,
45
+ "step": 4
46
+ },
47
+ {
48
+ "epoch": 0.050761421319796954,
49
+ "grad_norm": 0.6615588665008545,
50
+ "learning_rate": 0.0001,
51
+ "loss": 4.448,
52
+ "step": 5
53
+ },
54
+ {
55
+ "epoch": 0.06091370558375635,
56
+ "grad_norm": 0.751004159450531,
57
+ "learning_rate": 0.00012,
58
+ "loss": 4.4964,
59
+ "step": 6
60
+ },
61
+ {
62
+ "epoch": 0.07106598984771574,
63
+ "grad_norm": 0.886650562286377,
64
+ "learning_rate": 0.00014,
65
+ "loss": 4.4011,
66
+ "step": 7
67
+ },
68
+ {
69
+ "epoch": 0.08121827411167512,
70
+ "grad_norm": 1.1175657510757446,
71
+ "learning_rate": 0.00016,
72
+ "loss": 3.9849,
73
+ "step": 8
74
+ },
75
+ {
76
+ "epoch": 0.09137055837563451,
77
+ "grad_norm": 0.9861809015274048,
78
+ "learning_rate": 0.00018,
79
+ "loss": 3.8218,
80
+ "step": 9
81
+ },
82
+ {
83
+ "epoch": 0.10152284263959391,
84
+ "grad_norm": 1.1107577085494995,
85
+ "learning_rate": 0.0002,
86
+ "loss": 3.6451,
87
+ "step": 10
88
+ },
89
+ {
90
+ "epoch": 0.1116751269035533,
91
+ "grad_norm": 0.9435870051383972,
92
+ "learning_rate": 0.000199985736255971,
93
+ "loss": 3.5619,
94
+ "step": 11
95
+ },
96
+ {
97
+ "epoch": 0.1218274111675127,
98
+ "grad_norm": 0.9529088139533997,
99
+ "learning_rate": 0.0001999429490929718,
100
+ "loss": 3.4561,
101
+ "step": 12
102
+ },
103
+ {
104
+ "epoch": 0.1319796954314721,
105
+ "grad_norm": 1.3805211782455444,
106
+ "learning_rate": 0.00019987165071710527,
107
+ "loss": 3.2067,
108
+ "step": 13
109
+ },
110
+ {
111
+ "epoch": 0.14213197969543148,
112
+ "grad_norm": 1.319393515586853,
113
+ "learning_rate": 0.00019977186146800707,
114
+ "loss": 3.0656,
115
+ "step": 14
116
+ },
117
+ {
118
+ "epoch": 0.15228426395939088,
119
+ "grad_norm": 1.061409592628479,
120
+ "learning_rate": 0.0001996436098130433,
121
+ "loss": 2.7711,
122
+ "step": 15
123
+ },
124
+ {
125
+ "epoch": 0.16243654822335024,
126
+ "grad_norm": 1.036845326423645,
127
+ "learning_rate": 0.00019948693233918952,
128
+ "loss": 2.6576,
129
+ "step": 16
130
+ },
131
+ {
132
+ "epoch": 0.17258883248730963,
133
+ "grad_norm": 1.0924557447433472,
134
+ "learning_rate": 0.00019930187374259337,
135
+ "loss": 2.4101,
136
+ "step": 17
137
+ },
138
+ {
139
+ "epoch": 0.18274111675126903,
140
+ "grad_norm": 1.0557212829589844,
141
+ "learning_rate": 0.00019908848681582391,
142
+ "loss": 2.2991,
143
+ "step": 18
144
+ },
145
+ {
146
+ "epoch": 0.19289340101522842,
147
+ "grad_norm": 1.1735273599624634,
148
+ "learning_rate": 0.00019884683243281116,
149
+ "loss": 2.3991,
150
+ "step": 19
151
+ },
152
+ {
153
+ "epoch": 0.20304568527918782,
154
+ "grad_norm": 0.8104203343391418,
155
+ "learning_rate": 0.00019857697953148037,
156
+ "loss": 2.107,
157
+ "step": 20
158
+ },
159
+ {
160
+ "epoch": 0.2131979695431472,
161
+ "grad_norm": 0.7275764346122742,
162
+ "learning_rate": 0.00019827900509408581,
163
+ "loss": 2.0596,
164
+ "step": 21
165
+ },
166
+ {
167
+ "epoch": 0.2233502538071066,
168
+ "grad_norm": 1.0672590732574463,
169
+ "learning_rate": 0.00019795299412524945,
170
+ "loss": 1.9543,
171
+ "step": 22
172
+ },
173
+ {
174
+ "epoch": 0.233502538071066,
175
+ "grad_norm": 0.5848283767700195,
176
+ "learning_rate": 0.00019759903962771156,
177
+ "loss": 1.8091,
178
+ "step": 23
179
+ },
180
+ {
181
+ "epoch": 0.2436548223350254,
182
+ "grad_norm": 0.9580035209655762,
183
+ "learning_rate": 0.00019721724257579907,
184
+ "loss": 1.7178,
185
+ "step": 24
186
+ },
187
+ {
188
+ "epoch": 0.25380710659898476,
189
+ "grad_norm": 0.5362741351127625,
190
+ "learning_rate": 0.00019680771188662044,
191
+ "loss": 1.6739,
192
+ "step": 25
193
+ },
194
+ {
195
+ "epoch": 0.2639593908629442,
196
+ "grad_norm": 0.5108774304389954,
197
+ "learning_rate": 0.0001963705643889941,
198
+ "loss": 1.7575,
199
+ "step": 26
200
+ },
201
+ {
202
+ "epoch": 0.27411167512690354,
203
+ "grad_norm": 0.5604164004325867,
204
+ "learning_rate": 0.00019590592479012023,
205
+ "loss": 1.6815,
206
+ "step": 27
207
+ },
208
+ {
209
+ "epoch": 0.28426395939086296,
210
+ "grad_norm": 0.7223322987556458,
211
+ "learning_rate": 0.00019541392564000488,
212
+ "loss": 1.6213,
213
+ "step": 28
214
+ },
215
+ {
216
+ "epoch": 0.29441624365482233,
217
+ "grad_norm": 0.5081471800804138,
218
+ "learning_rate": 0.00019489470729364692,
219
+ "loss": 1.5935,
220
+ "step": 29
221
+ },
222
+ {
223
+ "epoch": 0.30456852791878175,
224
+ "grad_norm": 0.5000993013381958,
225
+ "learning_rate": 0.00019434841787099803,
226
+ "loss": 1.6237,
227
+ "step": 30
228
+ },
229
+ {
230
+ "epoch": 0.3147208121827411,
231
+ "grad_norm": 0.45925211906433105,
232
+ "learning_rate": 0.00019377521321470805,
233
+ "loss": 1.6201,
234
+ "step": 31
235
+ },
236
+ {
237
+ "epoch": 0.3248730964467005,
238
+ "grad_norm": 0.38572826981544495,
239
+ "learning_rate": 0.00019317525684566685,
240
+ "loss": 1.4805,
241
+ "step": 32
242
+ },
243
+ {
244
+ "epoch": 0.3350253807106599,
245
+ "grad_norm": 0.28524091839790344,
246
+ "learning_rate": 0.00019254871991635598,
247
+ "loss": 1.5985,
248
+ "step": 33
249
+ },
250
+ {
251
+ "epoch": 0.34517766497461927,
252
+ "grad_norm": 0.3277890980243683,
253
+ "learning_rate": 0.00019189578116202307,
254
+ "loss": 1.4994,
255
+ "step": 34
256
+ },
257
+ {
258
+ "epoch": 0.3553299492385787,
259
+ "grad_norm": 0.3320370018482208,
260
+ "learning_rate": 0.00019121662684969335,
261
+ "loss": 1.5039,
262
+ "step": 35
263
+ },
264
+ {
265
+ "epoch": 0.36548223350253806,
266
+ "grad_norm": 0.2798719108104706,
267
+ "learning_rate": 0.00019051145072503215,
268
+ "loss": 1.4997,
269
+ "step": 36
270
+ },
271
+ {
272
+ "epoch": 0.3756345177664975,
273
+ "grad_norm": 1.7497050762176514,
274
+ "learning_rate": 0.00018978045395707418,
275
+ "loss": 1.5465,
276
+ "step": 37
277
+ },
278
+ {
279
+ "epoch": 0.38578680203045684,
280
+ "grad_norm": 0.27379170060157776,
281
+ "learning_rate": 0.00018902384508083517,
282
+ "loss": 1.4846,
283
+ "step": 38
284
+ },
285
+ {
286
+ "epoch": 0.39593908629441626,
287
+ "grad_norm": 0.36681699752807617,
288
+ "learning_rate": 0.00018824183993782192,
289
+ "loss": 1.4154,
290
+ "step": 39
291
+ },
292
+ {
293
+ "epoch": 0.40609137055837563,
294
+ "grad_norm": 0.45136329531669617,
295
+ "learning_rate": 0.00018743466161445823,
296
+ "loss": 1.3928,
297
+ "step": 40
298
+ },
299
+ {
300
+ "epoch": 0.41624365482233505,
301
+ "grad_norm": 0.27879664301872253,
302
+ "learning_rate": 0.00018660254037844388,
303
+ "loss": 1.5119,
304
+ "step": 41
305
+ },
306
+ {
307
+ "epoch": 0.4263959390862944,
308
+ "grad_norm": 0.29230332374572754,
309
+ "learning_rate": 0.0001857457136130651,
310
+ "loss": 1.5095,
311
+ "step": 42
312
+ },
313
+ {
314
+ "epoch": 0.4365482233502538,
315
+ "grad_norm": 0.2731008231639862,
316
+ "learning_rate": 0.00018486442574947511,
317
+ "loss": 1.4672,
318
+ "step": 43
319
+ },
320
+ {
321
+ "epoch": 0.4467005076142132,
322
+ "grad_norm": 0.23685932159423828,
323
+ "learning_rate": 0.00018395892819696389,
324
+ "loss": 1.4173,
325
+ "step": 44
326
+ },
327
+ {
328
+ "epoch": 0.45685279187817257,
329
+ "grad_norm": 0.2703058421611786,
330
+ "learning_rate": 0.00018302947927123766,
331
+ "loss": 1.4088,
332
+ "step": 45
333
+ },
334
+ {
335
+ "epoch": 0.467005076142132,
336
+ "grad_norm": 1.65743887424469,
337
+ "learning_rate": 0.00018207634412072764,
338
+ "loss": 1.4672,
339
+ "step": 46
340
+ },
341
+ {
342
+ "epoch": 0.47715736040609136,
343
+ "grad_norm": 0.21287347376346588,
344
+ "learning_rate": 0.00018109979465095013,
345
+ "loss": 1.3975,
346
+ "step": 47
347
+ },
348
+ {
349
+ "epoch": 0.4873096446700508,
350
+ "grad_norm": 0.3460160791873932,
351
+ "learning_rate": 0.00018010010944693848,
352
+ "loss": 1.4501,
353
+ "step": 48
354
+ },
355
+ {
356
+ "epoch": 0.49746192893401014,
357
+ "grad_norm": 0.4228818714618683,
358
+ "learning_rate": 0.00017907757369376985,
359
+ "loss": 1.4632,
360
+ "step": 49
361
+ },
362
+ {
363
+ "epoch": 0.5076142131979695,
364
+ "grad_norm": 0.46471402049064636,
365
+ "learning_rate": 0.0001780324790952092,
366
+ "loss": 1.3696,
367
+ "step": 50
368
+ },
369
+ {
370
+ "epoch": 0.5177664974619289,
371
+ "grad_norm": 0.35602033138275146,
372
+ "learning_rate": 0.00017696512379049325,
373
+ "loss": 1.4096,
374
+ "step": 51
375
+ },
376
+ {
377
+ "epoch": 0.5279187817258884,
378
+ "grad_norm": 0.2879682779312134,
379
+ "learning_rate": 0.0001758758122692791,
380
+ "loss": 1.337,
381
+ "step": 52
382
+ },
383
+ {
384
+ "epoch": 0.5380710659898477,
385
+ "grad_norm": 0.1947374939918518,
386
+ "learning_rate": 0.00017476485528478093,
387
+ "loss": 1.3815,
388
+ "step": 53
389
+ },
390
+ {
391
+ "epoch": 0.5482233502538071,
392
+ "grad_norm": 0.22819018363952637,
393
+ "learning_rate": 0.00017363256976511972,
394
+ "loss": 1.4021,
395
+ "step": 54
396
+ },
397
+ {
398
+ "epoch": 0.5583756345177665,
399
+ "grad_norm": 0.19164641201496124,
400
+ "learning_rate": 0.000172479278722912,
401
+ "loss": 1.3899,
402
+ "step": 55
403
+ },
404
+ {
405
+ "epoch": 0.5685279187817259,
406
+ "grad_norm": 0.5477288961410522,
407
+ "learning_rate": 0.00017130531116312203,
408
+ "loss": 1.4089,
409
+ "step": 56
410
+ },
411
+ {
412
+ "epoch": 0.5786802030456852,
413
+ "grad_norm": 0.6282036900520325,
414
+ "learning_rate": 0.0001701110019892053,
415
+ "loss": 1.3983,
416
+ "step": 57
417
+ },
418
+ {
419
+ "epoch": 0.5888324873096447,
420
+ "grad_norm": 0.5962779521942139,
421
+ "learning_rate": 0.00016889669190756868,
422
+ "loss": 1.3126,
423
+ "step": 58
424
+ },
425
+ {
426
+ "epoch": 0.5989847715736041,
427
+ "grad_norm": 0.39695534110069275,
428
+ "learning_rate": 0.00016766272733037576,
429
+ "loss": 1.3693,
430
+ "step": 59
431
+ },
432
+ {
433
+ "epoch": 0.6091370558375635,
434
+ "grad_norm": 0.2737330198287964,
435
+ "learning_rate": 0.00016640946027672392,
436
+ "loss": 1.4286,
437
+ "step": 60
438
+ },
439
+ {
440
+ "epoch": 0.6192893401015228,
441
+ "grad_norm": 0.34324145317077637,
442
+ "learning_rate": 0.00016513724827222227,
443
+ "loss": 1.3363,
444
+ "step": 61
445
+ },
446
+ {
447
+ "epoch": 0.6294416243654822,
448
+ "grad_norm": 0.4945085942745209,
449
+ "learning_rate": 0.00016384645424699835,
450
+ "loss": 1.4388,
451
+ "step": 62
452
+ },
453
+ {
454
+ "epoch": 0.6395939086294417,
455
+ "grad_norm": 0.3939533829689026,
456
+ "learning_rate": 0.00016253744643216368,
457
+ "loss": 1.325,
458
+ "step": 63
459
+ },
460
+ {
461
+ "epoch": 0.649746192893401,
462
+ "grad_norm": 0.3593675196170807,
463
+ "learning_rate": 0.0001612105982547663,
464
+ "loss": 1.353,
465
+ "step": 64
466
+ },
467
+ {
468
+ "epoch": 0.6598984771573604,
469
+ "grad_norm": 0.3457062244415283,
470
+ "learning_rate": 0.0001598662882312615,
471
+ "loss": 1.3119,
472
+ "step": 65
473
+ },
474
+ {
475
+ "epoch": 0.6700507614213198,
476
+ "grad_norm": 0.22607868909835815,
477
+ "learning_rate": 0.00015850489985953076,
478
+ "loss": 1.3281,
479
+ "step": 66
480
+ },
481
+ {
482
+ "epoch": 0.6802030456852792,
483
+ "grad_norm": 0.1937730461359024,
484
+ "learning_rate": 0.00015712682150947923,
485
+ "loss": 1.3061,
486
+ "step": 67
487
+ },
488
+ {
489
+ "epoch": 0.6903553299492385,
490
+ "grad_norm": 0.19334916770458221,
491
+ "learning_rate": 0.00015573244631224365,
492
+ "loss": 1.2995,
493
+ "step": 68
494
+ },
495
+ {
496
+ "epoch": 0.700507614213198,
497
+ "grad_norm": 0.43978920578956604,
498
+ "learning_rate": 0.0001543221720480419,
499
+ "loss": 1.3196,
500
+ "step": 69
501
+ },
502
+ {
503
+ "epoch": 0.7106598984771574,
504
+ "grad_norm": 0.20429864525794983,
505
+ "learning_rate": 0.00015289640103269625,
506
+ "loss": 1.3428,
507
+ "step": 70
508
+ },
509
+ {
510
+ "epoch": 0.7208121827411168,
511
+ "grad_norm": 0.2042793482542038,
512
+ "learning_rate": 0.0001514555400028629,
513
+ "loss": 1.2717,
514
+ "step": 71
515
+ },
516
+ {
517
+ "epoch": 0.7309644670050761,
518
+ "grad_norm": 0.2089298814535141,
519
+ "learning_rate": 0.00015000000000000001,
520
+ "loss": 1.2823,
521
+ "step": 72
522
+ },
523
+ {
524
+ "epoch": 0.7411167512690355,
525
+ "grad_norm": 0.29447218775749207,
526
+ "learning_rate": 0.00014853019625310813,
527
+ "loss": 1.2596,
528
+ "step": 73
529
+ },
530
+ {
531
+ "epoch": 0.751269035532995,
532
+ "grad_norm": 0.20766524970531464,
533
+ "learning_rate": 0.0001470465480602756,
534
+ "loss": 1.3421,
535
+ "step": 74
536
+ },
537
+ {
538
+ "epoch": 0.7614213197969543,
539
+ "grad_norm": 0.19240014255046844,
540
+ "learning_rate": 0.0001455494786690634,
541
+ "loss": 1.3871,
542
+ "step": 75
543
+ },
544
+ {
545
+ "epoch": 0.7715736040609137,
546
+ "grad_norm": 0.16677537560462952,
547
+ "learning_rate": 0.00014403941515576344,
548
+ "loss": 1.2507,
549
+ "step": 76
550
+ },
551
+ {
552
+ "epoch": 0.7817258883248731,
553
+ "grad_norm": 0.1933940052986145,
554
+ "learning_rate": 0.00014251678830356408,
555
+ "loss": 1.2792,
556
+ "step": 77
557
+ },
558
+ {
559
+ "epoch": 0.7918781725888325,
560
+ "grad_norm": 0.19050206243991852,
561
+ "learning_rate": 0.00014098203247965875,
562
+ "loss": 1.3017,
563
+ "step": 78
564
+ },
565
+ {
566
+ "epoch": 0.8020304568527918,
567
+ "grad_norm": 0.25748810172080994,
568
+ "learning_rate": 0.00013943558551133186,
569
+ "loss": 1.2879,
570
+ "step": 79
571
+ },
572
+ {
573
+ "epoch": 0.8121827411167513,
574
+ "grad_norm": 0.2314893752336502,
575
+ "learning_rate": 0.0001378778885610576,
576
+ "loss": 1.3692,
577
+ "step": 80
578
+ },
579
+ {
580
+ "epoch": 0.8223350253807107,
581
+ "grad_norm": 0.20771433413028717,
582
+ "learning_rate": 0.00013630938600064747,
583
+ "loss": 1.3268,
584
+ "step": 81
585
+ },
586
+ {
587
+ "epoch": 0.8324873096446701,
588
+ "grad_norm": 0.18968452513217926,
589
+ "learning_rate": 0.00013473052528448201,
590
+ "loss": 1.2663,
591
+ "step": 82
592
+ },
593
+ {
594
+ "epoch": 0.8426395939086294,
595
+ "grad_norm": 0.1978602409362793,
596
+ "learning_rate": 0.0001331417568218636,
597
+ "loss": 1.2968,
598
+ "step": 83
599
+ },
600
+ {
601
+ "epoch": 0.8527918781725888,
602
+ "grad_norm": 0.9941853284835815,
603
+ "learning_rate": 0.00013154353384852558,
604
+ "loss": 1.3187,
605
+ "step": 84
606
+ },
607
+ {
608
+ "epoch": 0.8629441624365483,
609
+ "grad_norm": 0.18706466257572174,
610
+ "learning_rate": 0.00012993631229733582,
611
+ "loss": 1.2808,
612
+ "step": 85
613
+ },
614
+ {
615
+ "epoch": 0.8730964467005076,
616
+ "grad_norm": 0.18098409473896027,
617
+ "learning_rate": 0.00012832055066823038,
618
+ "loss": 1.2246,
619
+ "step": 86
620
+ },
621
+ {
622
+ "epoch": 0.883248730964467,
623
+ "grad_norm": 0.22270160913467407,
624
+ "learning_rate": 0.00012669670989741517,
625
+ "loss": 1.3028,
626
+ "step": 87
627
+ },
628
+ {
629
+ "epoch": 0.8934010152284264,
630
+ "grad_norm": 0.25465860962867737,
631
+ "learning_rate": 0.00012506525322587207,
632
+ "loss": 1.347,
633
+ "step": 88
634
+ },
635
+ {
636
+ "epoch": 0.9035532994923858,
637
+ "grad_norm": 0.23076751828193665,
638
+ "learning_rate": 0.00012342664606720822,
639
+ "loss": 1.3099,
640
+ "step": 89
641
+ },
642
+ {
643
+ "epoch": 0.9137055837563451,
644
+ "grad_norm": 0.19831228256225586,
645
+ "learning_rate": 0.00012178135587488515,
646
+ "loss": 1.278,
647
+ "step": 90
648
+ },
649
+ {
650
+ "epoch": 0.9238578680203046,
651
+ "grad_norm": 0.22052858769893646,
652
+ "learning_rate": 0.00012012985200886602,
653
+ "loss": 1.2165,
654
+ "step": 91
655
+ },
656
+ {
657
+ "epoch": 0.934010152284264,
658
+ "grad_norm": 0.18730390071868896,
659
+ "learning_rate": 0.00011847260560171896,
660
+ "loss": 1.2814,
661
+ "step": 92
662
+ },
663
+ {
664
+ "epoch": 0.9441624365482234,
665
+ "grad_norm": 0.16983264684677124,
666
+ "learning_rate": 0.00011681008942421483,
667
+ "loss": 1.2235,
668
+ "step": 93
669
+ },
670
+ {
671
+ "epoch": 0.9543147208121827,
672
+ "grad_norm": 0.17806044220924377,
673
+ "learning_rate": 0.00011514277775045768,
674
+ "loss": 1.1867,
675
+ "step": 94
676
+ },
677
+ {
678
+ "epoch": 0.9644670050761421,
679
+ "grad_norm": 0.1574580818414688,
680
+ "learning_rate": 0.00011347114622258612,
681
+ "loss": 1.2718,
682
+ "step": 95
683
+ },
684
+ {
685
+ "epoch": 0.9746192893401016,
686
+ "grad_norm": 0.15895454585552216,
687
+ "learning_rate": 0.00011179567171508463,
688
+ "loss": 1.245,
689
+ "step": 96
690
+ },
691
+ {
692
+ "epoch": 0.9847715736040609,
693
+ "grad_norm": 0.22224721312522888,
694
+ "learning_rate": 0.00011011683219874323,
695
+ "loss": 1.2945,
696
+ "step": 97
697
+ },
698
+ {
699
+ "epoch": 0.9949238578680203,
700
+ "grad_norm": 0.16613103449344635,
701
+ "learning_rate": 0.00010843510660430447,
702
+ "loss": 1.3054,
703
+ "step": 98
704
+ },
705
+ {
706
+ "epoch": 0.9949238578680203,
707
+ "eval_loss": 1.249497413635254,
708
+ "eval_runtime": 58.5386,
709
+ "eval_samples_per_second": 10.079,
710
+ "eval_steps_per_second": 1.264,
711
+ "step": 98
712
+ }
713
+ ],
714
+ "logging_steps": 1,
715
+ "max_steps": 196,
716
+ "num_input_tokens_seen": 0,
717
+ "num_train_epochs": 2,
718
+ "save_steps": 98,
719
+ "stateful_callbacks": {
720
+ "TrainerControl": {
721
+ "args": {
722
+ "should_epoch_stop": false,
723
+ "should_evaluate": false,
724
+ "should_log": false,
725
+ "should_save": true,
726
+ "should_training_stop": false
727
+ },
728
+ "attributes": {}
729
+ }
730
+ },
731
+ "total_flos": 5.153973189130322e+17,
732
+ "train_batch_size": 2,
733
+ "trial_name": null,
734
+ "trial_params": null
735
+ }
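`trainer_state.json` is the Trainer's running log: one entry per optimizer step (`logging_steps` 1) with loss, learning rate and gradient norm, plus periodic eval entries (`eval_steps` 98); over this first epoch (98 of 196 planned steps, batch size 2) the eval loss falls from about 4.56 to 1.25. A short sketch of pulling the curves back out of the file:

```python
# Sketch: read the training/eval loss curves from a trainer_state.json like the one above.
import json

with open("checkpoint-98/trainer_state.json") as f:
    state = json.load(f)

train_curve = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
eval_curve = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]

print(train_curve[:3])  # [(1, 4.3699), (2, 4.5708), (3, 4.5871)]
print(eval_curve)       # [(1, 4.561...), (98, 1.249...)]
```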
checkpoint-98/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ebabd7fe1a216aadf93e2b9106041cfaee52bf0b7322ca74f94d144ee0af1908
3
+ size 6200
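`training_args.bin` is the torch-pickled `TrainingArguments` object the Trainer saves next to each checkpoint, hence only 6,200 bytes behind its LFS pointer. A sketch of inspecting it, assuming a `transformers` version compatible with the one used for training:

```python
# Sketch: load the pickled TrainingArguments saved alongside this checkpoint.
import torch

args = torch.load("checkpoint-98/training_args.bin", weights_only=False)  # plain pickle, not tensors
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```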