ErrorAI committed (verified)
Commit 5cc7c78 · 1 Parent(s): 7cbef4c

Training in progress, step 348, checkpoint
last-checkpoint/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: unsloth/OpenHermes-2.5-Mistral-7B
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.13.2
last-checkpoint/adapter_config.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "unsloth/OpenHermes-2.5-Mistral-7B",
5
+ "bias": "none",
6
+ "fan_in_fan_out": null,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layer_replication": null,
10
+ "layers_pattern": null,
11
+ "layers_to_transform": null,
12
+ "loftq_config": {},
13
+ "lora_alpha": 16,
14
+ "lora_dropout": 0.05,
15
+ "megatron_config": null,
16
+ "megatron_core": "megatron.core",
17
+ "modules_to_save": null,
18
+ "peft_type": "LORA",
19
+ "r": 8,
20
+ "rank_pattern": {},
21
+ "revision": null,
22
+ "target_modules": [
23
+ "k_proj",
24
+ "up_proj",
25
+ "down_proj",
26
+ "q_proj",
27
+ "gate_proj",
28
+ "v_proj",
29
+ "o_proj"
30
+ ],
31
+ "task_type": "CAUSAL_LM",
32
+ "use_dora": false,
33
+ "use_rslora": false
34
+ }
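
The adapter_config.json above describes a rank-8 LoRA adapter (lora_alpha 16, dropout 0.05) on the attention and MLP projection modules of unsloth/OpenHermes-2.5-Mistral-7B. A minimal sketch of loading such a checkpoint for inference, assuming the `transformers` and `peft` packages are installed and the `last-checkpoint/` directory from this commit has been downloaded locally:

```python
# Sketch: load the LoRA adapter described by adapter_config.json onto its base model.
# Assumes the "last-checkpoint/" directory from this commit is available locally.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/OpenHermes-2.5-Mistral-7B",  # base_model_name_or_path from the config
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("last-checkpoint")

# PeftModel reads adapter_config.json and adapter_model.safetensors from the directory
# and wraps the base model with the LoRA weights (r=8, lora_alpha=16).
model = PeftModel.from_pretrained(base, "last-checkpoint")
model.eval()
```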
last-checkpoint/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ec5c825b43bbc6bb7cd09f76689d876e75953a6d1fbaa30fedef9fd963efcfe
3
+ size 83945296
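
The binary files in this commit (adapter_model.safetensors, optimizer.pt, rng_state.pth, scheduler.pt, tokenizer.model) are stored as Git LFS pointers, so each diff records only the spec version, a sha256 oid, and the size in bytes. A small sketch, assuming the actual files have been downloaded locally, of checking a file against its pointer:

```python
# Sketch: verify a downloaded checkpoint file against its Git LFS pointer
# (the pointer stores the file's sha256 as "oid" and its size in bytes).
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, sha256_hex: str, size_bytes: int) -> bool:
    data = Path(path).read_bytes()
    return len(data) == size_bytes and hashlib.sha256(data).hexdigest() == sha256_hex

# Values taken from the adapter_model.safetensors pointer above; the local path is illustrative.
print(matches_lfs_pointer(
    "last-checkpoint/adapter_model.safetensors",
    "2ec5c825b43bbc6bb7cd09f76689d876e75953a6d1fbaa30fedef9fd963efcfe",
    83945296,
))
```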
last-checkpoint/added_tokens.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "<|im_end|>": 32000,
3
+ "<|im_start|>": 32001
4
+ }
last-checkpoint/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0d32c126647bf298d773e959f33764ac33fc97e6ded37422c897310d84c9bfba
3
+ size 43123028
last-checkpoint/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8359c45b2046e610bef352743a42432f3c1fd9f8c9db27fba49075c95b07823
3
+ size 14244
last-checkpoint/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c09a30f43b8b0ea81cea46f36d3c705dcbf107d423dcc0433a9a5d4d300bbdca
3
+ size 1064
last-checkpoint/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|im_end|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<unk>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": {
24
+ "content": "<unk>",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ }
30
+ }
last-checkpoint/tokenizer.json ADDED
The diff for this file is too large to render.
last-checkpoint/tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
3
+ size 493443
last-checkpoint/tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "add_prefix_space": true,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<unk>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<s>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ },
22
+ "2": {
23
+ "content": "</s>",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": false,
27
+ "single_word": false,
28
+ "special": true
29
+ },
30
+ "32000": {
31
+ "content": "<|im_end|>",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false,
36
+ "special": true
37
+ },
38
+ "32001": {
39
+ "content": "<|im_start|>",
40
+ "lstrip": false,
41
+ "normalized": false,
42
+ "rstrip": false,
43
+ "single_word": false,
44
+ "special": true
45
+ }
46
+ },
47
+ "additional_special_tokens": [],
48
+ "bos_token": "<s>",
49
+ "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}",
50
+ "clean_up_tokenization_spaces": false,
51
+ "eos_token": "<|im_end|>",
52
+ "legacy": true,
53
+ "model_max_length": 32768,
54
+ "pad_token": "<unk>",
55
+ "padding_side": "right",
56
+ "sp_model_kwargs": {},
57
+ "spaces_between_special_tokens": false,
58
+ "tokenizer_class": "LlamaTokenizer",
59
+ "trust_remote_code": false,
60
+ "unk_token": "<unk>",
61
+ "use_default_system_prompt": true,
62
+ "use_fast": true
63
+ }
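
tokenizer_config.json registers the ChatML-style special tokens from added_tokens.json (<|im_end|> = 32000 as the EOS token, <|im_start|> = 32001) on top of the base Llama tokenizer, pads with <unk>, and ships a chat_template string. A minimal sketch of exercising those settings with `transformers`, again assuming the `last-checkpoint/` directory is available locally:

```python
# Sketch: load the tokenizer from this checkpoint and render a prompt with its chat_template.
# Assumes the "last-checkpoint/" directory from this commit has been downloaded locally.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("last-checkpoint")

print(tok.eos_token, tok.convert_tokens_to_ids("<|im_end|>"))  # "<|im_end|>", 32000
print(tok.pad_token, tok.padding_side)                         # "<unk>", "right"

messages = [{"role": "user", "content": "Hello!"}]
# apply_chat_template formats the conversation using the "chat_template" field above.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```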
last-checkpoint/trainer_state.json ADDED
@@ -0,0 +1,2485 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.06824198450828513,
5
+ "eval_steps": 348,
6
+ "global_step": 348,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.00019609765663300324,
13
+ "grad_norm": 20.33372688293457,
14
+ "learning_rate": 2e-05,
15
+ "loss": 3.0843,
16
+ "step": 1
17
+ },
18
+ {
19
+ "epoch": 0.00019609765663300324,
20
+ "eval_loss": 1.1017773151397705,
21
+ "eval_runtime": 79.9135,
22
+ "eval_samples_per_second": 26.879,
23
+ "eval_steps_per_second": 13.44,
24
+ "step": 1
25
+ },
26
+ {
27
+ "epoch": 0.0003921953132660065,
28
+ "grad_norm": 19.32895278930664,
29
+ "learning_rate": 4e-05,
30
+ "loss": 3.2221,
31
+ "step": 2
32
+ },
33
+ {
34
+ "epoch": 0.0005882929698990097,
35
+ "grad_norm": 18.6882266998291,
36
+ "learning_rate": 6e-05,
37
+ "loss": 3.8951,
38
+ "step": 3
39
+ },
40
+ {
41
+ "epoch": 0.000784390626532013,
42
+ "grad_norm": 43.008060455322266,
43
+ "learning_rate": 8e-05,
44
+ "loss": 5.167,
45
+ "step": 4
46
+ },
47
+ {
48
+ "epoch": 0.0009804882831650162,
49
+ "grad_norm": 21.642993927001953,
50
+ "learning_rate": 0.0001,
51
+ "loss": 3.1304,
52
+ "step": 5
53
+ },
54
+ {
55
+ "epoch": 0.0011765859397980193,
56
+ "grad_norm": 29.79266929626465,
57
+ "learning_rate": 0.00012,
58
+ "loss": 4.5153,
59
+ "step": 6
60
+ },
61
+ {
62
+ "epoch": 0.0013726835964310226,
63
+ "grad_norm": 25.503681182861328,
64
+ "learning_rate": 0.00014,
65
+ "loss": 3.8083,
66
+ "step": 7
67
+ },
68
+ {
69
+ "epoch": 0.001568781253064026,
70
+ "grad_norm": 32.35524368286133,
71
+ "learning_rate": 0.00016,
72
+ "loss": 4.253,
73
+ "step": 8
74
+ },
75
+ {
76
+ "epoch": 0.0017648789096970292,
77
+ "grad_norm": 21.053390502929688,
78
+ "learning_rate": 0.00018,
79
+ "loss": 3.3757,
80
+ "step": 9
81
+ },
82
+ {
83
+ "epoch": 0.0019609765663300325,
84
+ "grad_norm": 25.7067928314209,
85
+ "learning_rate": 0.0002,
86
+ "loss": 3.2484,
87
+ "step": 10
88
+ },
89
+ {
90
+ "epoch": 0.0021570742229630358,
91
+ "grad_norm": 22.57227897644043,
92
+ "learning_rate": 0.00019999974049780868,
93
+ "loss": 2.8378,
94
+ "step": 11
95
+ },
96
+ {
97
+ "epoch": 0.0023531718795960386,
98
+ "grad_norm": 19.06597900390625,
99
+ "learning_rate": 0.00019999896199258152,
100
+ "loss": 3.231,
101
+ "step": 12
102
+ },
103
+ {
104
+ "epoch": 0.002549269536229042,
105
+ "grad_norm": 17.590620040893555,
106
+ "learning_rate": 0.000199997664488359,
107
+ "loss": 2.2391,
108
+ "step": 13
109
+ },
110
+ {
111
+ "epoch": 0.002745367192862045,
112
+ "grad_norm": 8.627043724060059,
113
+ "learning_rate": 0.00019999584799187522,
114
+ "loss": 1.7095,
115
+ "step": 14
116
+ },
117
+ {
118
+ "epoch": 0.0029414648494950485,
119
+ "grad_norm": 21.60858917236328,
120
+ "learning_rate": 0.0001999935125125579,
121
+ "loss": 3.9299,
122
+ "step": 15
123
+ },
124
+ {
125
+ "epoch": 0.003137562506128052,
126
+ "grad_norm": 8.075380325317383,
127
+ "learning_rate": 0.00019999065806252829,
128
+ "loss": 1.7939,
129
+ "step": 16
130
+ },
131
+ {
132
+ "epoch": 0.003333660162761055,
133
+ "grad_norm": 11.393594741821289,
134
+ "learning_rate": 0.00019998728465660105,
135
+ "loss": 1.601,
136
+ "step": 17
137
+ },
138
+ {
139
+ "epoch": 0.0035297578193940584,
140
+ "grad_norm": 8.256339073181152,
141
+ "learning_rate": 0.00019998339231228434,
142
+ "loss": 3.1556,
143
+ "step": 18
144
+ },
145
+ {
146
+ "epoch": 0.0037258554760270617,
147
+ "grad_norm": 20.03615951538086,
148
+ "learning_rate": 0.0001999789810497796,
149
+ "loss": 2.0883,
150
+ "step": 19
151
+ },
152
+ {
153
+ "epoch": 0.003921953132660065,
154
+ "grad_norm": 10.166353225708008,
155
+ "learning_rate": 0.0001999740508919815,
156
+ "loss": 3.5616,
157
+ "step": 20
158
+ },
159
+ {
160
+ "epoch": 0.004118050789293068,
161
+ "grad_norm": 15.80553913116455,
162
+ "learning_rate": 0.0001999686018644777,
163
+ "loss": 3.0344,
164
+ "step": 21
165
+ },
166
+ {
167
+ "epoch": 0.0043141484459260715,
168
+ "grad_norm": 7.451974391937256,
169
+ "learning_rate": 0.00019996263399554897,
170
+ "loss": 2.1049,
171
+ "step": 22
172
+ },
173
+ {
174
+ "epoch": 0.004510246102559075,
175
+ "grad_norm": 5.434274673461914,
176
+ "learning_rate": 0.00019995614731616875,
177
+ "loss": 2.3178,
178
+ "step": 23
179
+ },
180
+ {
181
+ "epoch": 0.004706343759192077,
182
+ "grad_norm": 10.594315528869629,
183
+ "learning_rate": 0.00019994914186000328,
184
+ "loss": 1.7096,
185
+ "step": 24
186
+ },
187
+ {
188
+ "epoch": 0.0049024414158250805,
189
+ "grad_norm": 5.348718166351318,
190
+ "learning_rate": 0.0001999416176634111,
191
+ "loss": 2.695,
192
+ "step": 25
193
+ },
194
+ {
195
+ "epoch": 0.005098539072458084,
196
+ "grad_norm": 17.776073455810547,
197
+ "learning_rate": 0.00019993357476544312,
198
+ "loss": 1.7411,
199
+ "step": 26
200
+ },
201
+ {
202
+ "epoch": 0.005294636729091087,
203
+ "grad_norm": 10.051606178283691,
204
+ "learning_rate": 0.0001999250132078424,
205
+ "loss": 2.6161,
206
+ "step": 27
207
+ },
208
+ {
209
+ "epoch": 0.00549073438572409,
210
+ "grad_norm": 26.03020668029785,
211
+ "learning_rate": 0.00019991593303504376,
212
+ "loss": 3.3977,
213
+ "step": 28
214
+ },
215
+ {
216
+ "epoch": 0.005686832042357094,
217
+ "grad_norm": 10.213540077209473,
218
+ "learning_rate": 0.00019990633429417363,
219
+ "loss": 1.2442,
220
+ "step": 29
221
+ },
222
+ {
223
+ "epoch": 0.005882929698990097,
224
+ "grad_norm": 11.69288444519043,
225
+ "learning_rate": 0.00019989621703505,
226
+ "loss": 1.4702,
227
+ "step": 30
228
+ },
229
+ {
230
+ "epoch": 0.0060790273556231,
231
+ "grad_norm": 4.343452453613281,
232
+ "learning_rate": 0.00019988558131018186,
233
+ "loss": 1.0779,
234
+ "step": 31
235
+ },
236
+ {
237
+ "epoch": 0.006275125012256104,
238
+ "grad_norm": 9.106976509094238,
239
+ "learning_rate": 0.00019987442717476906,
240
+ "loss": 2.5887,
241
+ "step": 32
242
+ },
243
+ {
244
+ "epoch": 0.006471222668889107,
245
+ "grad_norm": 17.658370971679688,
246
+ "learning_rate": 0.00019986275468670205,
247
+ "loss": 2.2258,
248
+ "step": 33
249
+ },
250
+ {
251
+ "epoch": 0.00666732032552211,
252
+ "grad_norm": 6.7451090812683105,
253
+ "learning_rate": 0.00019985056390656162,
254
+ "loss": 1.7206,
255
+ "step": 34
256
+ },
257
+ {
258
+ "epoch": 0.0068634179821551134,
259
+ "grad_norm": 28.07065200805664,
260
+ "learning_rate": 0.00019983785489761837,
261
+ "loss": 2.7356,
262
+ "step": 35
263
+ },
264
+ {
265
+ "epoch": 0.007059515638788117,
266
+ "grad_norm": 11.387879371643066,
267
+ "learning_rate": 0.00019982462772583266,
268
+ "loss": 1.973,
269
+ "step": 36
270
+ },
271
+ {
272
+ "epoch": 0.00725561329542112,
273
+ "grad_norm": 9.64372444152832,
274
+ "learning_rate": 0.00019981088245985408,
275
+ "loss": 2.7339,
276
+ "step": 37
277
+ },
278
+ {
279
+ "epoch": 0.007451710952054123,
280
+ "grad_norm": 9.302544593811035,
281
+ "learning_rate": 0.00019979661917102115,
282
+ "loss": 1.7498,
283
+ "step": 38
284
+ },
285
+ {
286
+ "epoch": 0.007647808608687127,
287
+ "grad_norm": 15.064400672912598,
288
+ "learning_rate": 0.000199781837933361,
289
+ "loss": 3.0109,
290
+ "step": 39
291
+ },
292
+ {
293
+ "epoch": 0.00784390626532013,
294
+ "grad_norm": 7.281099319458008,
295
+ "learning_rate": 0.00019976653882358884,
296
+ "loss": 1.3118,
297
+ "step": 40
298
+ },
299
+ {
300
+ "epoch": 0.008040003921953132,
301
+ "grad_norm": 6.4474873542785645,
302
+ "learning_rate": 0.0001997507219211078,
303
+ "loss": 1.408,
304
+ "step": 41
305
+ },
306
+ {
307
+ "epoch": 0.008236101578586136,
308
+ "grad_norm": 13.101079940795898,
309
+ "learning_rate": 0.00019973438730800822,
310
+ "loss": 2.3367,
311
+ "step": 42
312
+ },
313
+ {
314
+ "epoch": 0.008432199235219139,
315
+ "grad_norm": 5.951049327850342,
316
+ "learning_rate": 0.00019971753506906753,
317
+ "loss": 0.9101,
318
+ "step": 43
319
+ },
320
+ {
321
+ "epoch": 0.008628296891852143,
322
+ "grad_norm": 11.212276458740234,
323
+ "learning_rate": 0.00019970016529174947,
324
+ "loss": 2.7058,
325
+ "step": 44
326
+ },
327
+ {
328
+ "epoch": 0.008824394548485145,
329
+ "grad_norm": 8.68136978149414,
330
+ "learning_rate": 0.0001996822780662041,
331
+ "loss": 2.0276,
332
+ "step": 45
333
+ },
334
+ {
335
+ "epoch": 0.00902049220511815,
336
+ "grad_norm": 17.70038414001465,
337
+ "learning_rate": 0.00019966387348526683,
338
+ "loss": 2.7989,
339
+ "step": 46
340
+ },
341
+ {
342
+ "epoch": 0.009216589861751152,
343
+ "grad_norm": 10.247598648071289,
344
+ "learning_rate": 0.00019964495164445824,
345
+ "loss": 1.9618,
346
+ "step": 47
347
+ },
348
+ {
349
+ "epoch": 0.009412687518384154,
350
+ "grad_norm": 10.378255844116211,
351
+ "learning_rate": 0.0001996255126419835,
352
+ "loss": 1.8003,
353
+ "step": 48
354
+ },
355
+ {
356
+ "epoch": 0.009608785175017159,
357
+ "grad_norm": 31.620820999145508,
358
+ "learning_rate": 0.0001996055565787319,
359
+ "loss": 2.8785,
360
+ "step": 49
361
+ },
362
+ {
363
+ "epoch": 0.009804882831650161,
364
+ "grad_norm": 9.976147651672363,
365
+ "learning_rate": 0.0001995850835582763,
366
+ "loss": 2.5605,
367
+ "step": 50
368
+ },
369
+ {
370
+ "epoch": 0.010000980488283165,
371
+ "grad_norm": 11.751899719238281,
372
+ "learning_rate": 0.00019956409368687258,
373
+ "loss": 2.7556,
374
+ "step": 51
375
+ },
376
+ {
377
+ "epoch": 0.010197078144916168,
378
+ "grad_norm": 15.828932762145996,
379
+ "learning_rate": 0.000199542587073459,
380
+ "loss": 2.7773,
381
+ "step": 52
382
+ },
383
+ {
384
+ "epoch": 0.010393175801549172,
385
+ "grad_norm": 10.772979736328125,
386
+ "learning_rate": 0.00019952056382965597,
387
+ "loss": 1.9553,
388
+ "step": 53
389
+ },
390
+ {
391
+ "epoch": 0.010589273458182174,
392
+ "grad_norm": 10.821427345275879,
393
+ "learning_rate": 0.00019949802406976495,
394
+ "loss": 1.8528,
395
+ "step": 54
396
+ },
397
+ {
398
+ "epoch": 0.010785371114815178,
399
+ "grad_norm": 7.228662490844727,
400
+ "learning_rate": 0.00019947496791076837,
401
+ "loss": 1.1844,
402
+ "step": 55
403
+ },
404
+ {
405
+ "epoch": 0.01098146877144818,
406
+ "grad_norm": 7.164773941040039,
407
+ "learning_rate": 0.00019945139547232872,
408
+ "loss": 1.0291,
409
+ "step": 56
410
+ },
411
+ {
412
+ "epoch": 0.011177566428081185,
413
+ "grad_norm": 13.927733421325684,
414
+ "learning_rate": 0.0001994273068767879,
415
+ "loss": 1.5417,
416
+ "step": 57
417
+ },
418
+ {
419
+ "epoch": 0.011373664084714187,
420
+ "grad_norm": 10.366493225097656,
421
+ "learning_rate": 0.00019940270224916688,
422
+ "loss": 1.5122,
423
+ "step": 58
424
+ },
425
+ {
426
+ "epoch": 0.011569761741347192,
427
+ "grad_norm": 11.2214994430542,
428
+ "learning_rate": 0.00019937758171716468,
429
+ "loss": 1.6003,
430
+ "step": 59
431
+ },
432
+ {
433
+ "epoch": 0.011765859397980194,
434
+ "grad_norm": 14.360090255737305,
435
+ "learning_rate": 0.000199351945411158,
436
+ "loss": 1.5651,
437
+ "step": 60
438
+ },
439
+ {
440
+ "epoch": 0.011961957054613198,
441
+ "grad_norm": 17.97150993347168,
442
+ "learning_rate": 0.00019932579346420038,
443
+ "loss": 1.6064,
444
+ "step": 61
445
+ },
446
+ {
447
+ "epoch": 0.0121580547112462,
448
+ "grad_norm": 10.190518379211426,
449
+ "learning_rate": 0.00019929912601202151,
450
+ "loss": 1.9151,
451
+ "step": 62
452
+ },
453
+ {
454
+ "epoch": 0.012354152367879203,
455
+ "grad_norm": 13.573248863220215,
456
+ "learning_rate": 0.00019927194319302677,
457
+ "loss": 4.0602,
458
+ "step": 63
459
+ },
460
+ {
461
+ "epoch": 0.012550250024512207,
462
+ "grad_norm": 16.919841766357422,
463
+ "learning_rate": 0.00019924424514829606,
464
+ "loss": 2.8292,
465
+ "step": 64
466
+ },
467
+ {
468
+ "epoch": 0.01274634768114521,
469
+ "grad_norm": 58.470252990722656,
470
+ "learning_rate": 0.00019921603202158354,
471
+ "loss": 1.9637,
472
+ "step": 65
473
+ },
474
+ {
475
+ "epoch": 0.012942445337778214,
476
+ "grad_norm": 18.334800720214844,
477
+ "learning_rate": 0.00019918730395931649,
478
+ "loss": 2.5609,
479
+ "step": 66
480
+ },
481
+ {
482
+ "epoch": 0.013138542994411216,
483
+ "grad_norm": 12.280759811401367,
484
+ "learning_rate": 0.00019915806111059486,
485
+ "loss": 1.2495,
486
+ "step": 67
487
+ },
488
+ {
489
+ "epoch": 0.01333464065104422,
490
+ "grad_norm": 8.015874862670898,
491
+ "learning_rate": 0.0001991283036271903,
492
+ "loss": 1.505,
493
+ "step": 68
494
+ },
495
+ {
496
+ "epoch": 0.013530738307677223,
497
+ "grad_norm": 7.713284969329834,
498
+ "learning_rate": 0.0001990980316635455,
499
+ "loss": 2.3898,
500
+ "step": 69
501
+ },
502
+ {
503
+ "epoch": 0.013726835964310227,
504
+ "grad_norm": 18.01800537109375,
505
+ "learning_rate": 0.00019906724537677316,
506
+ "loss": 3.0263,
507
+ "step": 70
508
+ },
509
+ {
510
+ "epoch": 0.01392293362094323,
511
+ "grad_norm": 21.270421981811523,
512
+ "learning_rate": 0.00019903594492665558,
513
+ "loss": 3.2547,
514
+ "step": 71
515
+ },
516
+ {
517
+ "epoch": 0.014119031277576233,
518
+ "grad_norm": 21.60205841064453,
519
+ "learning_rate": 0.0001990041304756434,
520
+ "loss": 2.577,
521
+ "step": 72
522
+ },
523
+ {
524
+ "epoch": 0.014315128934209236,
525
+ "grad_norm": 10.01419734954834,
526
+ "learning_rate": 0.00019897180218885507,
527
+ "loss": 1.9092,
528
+ "step": 73
529
+ },
530
+ {
531
+ "epoch": 0.01451122659084224,
532
+ "grad_norm": 14.10943603515625,
533
+ "learning_rate": 0.00019893896023407578,
534
+ "loss": 2.2377,
535
+ "step": 74
536
+ },
537
+ {
538
+ "epoch": 0.014707324247475242,
539
+ "grad_norm": 11.310667037963867,
540
+ "learning_rate": 0.0001989056047817567,
541
+ "loss": 1.6645,
542
+ "step": 75
543
+ },
544
+ {
545
+ "epoch": 0.014903421904108247,
546
+ "grad_norm": 6.586666107177734,
547
+ "learning_rate": 0.0001988717360050141,
548
+ "loss": 2.2651,
549
+ "step": 76
550
+ },
551
+ {
552
+ "epoch": 0.015099519560741249,
553
+ "grad_norm": 4.402716159820557,
554
+ "learning_rate": 0.00019883735407962846,
555
+ "loss": 1.3483,
556
+ "step": 77
557
+ },
558
+ {
559
+ "epoch": 0.015295617217374253,
560
+ "grad_norm": 9.384387016296387,
561
+ "learning_rate": 0.00019880245918404342,
562
+ "loss": 2.6391,
563
+ "step": 78
564
+ },
565
+ {
566
+ "epoch": 0.015491714874007256,
567
+ "grad_norm": 6.753894329071045,
568
+ "learning_rate": 0.000198767051499365,
569
+ "loss": 2.9391,
570
+ "step": 79
571
+ },
572
+ {
573
+ "epoch": 0.01568781253064026,
574
+ "grad_norm": 6.399787902832031,
575
+ "learning_rate": 0.00019873113120936074,
576
+ "loss": 3.7452,
577
+ "step": 80
578
+ },
579
+ {
580
+ "epoch": 0.01588391018727326,
581
+ "grad_norm": 8.880107879638672,
582
+ "learning_rate": 0.00019869469850045842,
583
+ "loss": 1.2771,
584
+ "step": 81
585
+ },
586
+ {
587
+ "epoch": 0.016080007843906265,
588
+ "grad_norm": 12.630661964416504,
589
+ "learning_rate": 0.00019865775356174545,
590
+ "loss": 2.2072,
591
+ "step": 82
592
+ },
593
+ {
594
+ "epoch": 0.01627610550053927,
595
+ "grad_norm": 7.974503993988037,
596
+ "learning_rate": 0.00019862029658496762,
597
+ "loss": 1.9795,
598
+ "step": 83
599
+ },
600
+ {
601
+ "epoch": 0.016472203157172273,
602
+ "grad_norm": 50.43594741821289,
603
+ "learning_rate": 0.00019858232776452837,
604
+ "loss": 1.5331,
605
+ "step": 84
606
+ },
607
+ {
608
+ "epoch": 0.016668300813805274,
609
+ "grad_norm": 7.273484230041504,
610
+ "learning_rate": 0.00019854384729748746,
611
+ "loss": 2.4005,
612
+ "step": 85
613
+ },
614
+ {
615
+ "epoch": 0.016864398470438278,
616
+ "grad_norm": 5.826492786407471,
617
+ "learning_rate": 0.00019850485538356027,
618
+ "loss": 2.1915,
619
+ "step": 86
620
+ },
621
+ {
622
+ "epoch": 0.017060496127071282,
623
+ "grad_norm": 9.881019592285156,
624
+ "learning_rate": 0.0001984653522251165,
625
+ "loss": 2.3309,
626
+ "step": 87
627
+ },
628
+ {
629
+ "epoch": 0.017256593783704286,
630
+ "grad_norm": 9.147713661193848,
631
+ "learning_rate": 0.00019842533802717923,
632
+ "loss": 1.1404,
633
+ "step": 88
634
+ },
635
+ {
636
+ "epoch": 0.017452691440337287,
637
+ "grad_norm": 13.98263931274414,
638
+ "learning_rate": 0.00019838481299742398,
639
+ "loss": 1.2166,
640
+ "step": 89
641
+ },
642
+ {
643
+ "epoch": 0.01764878909697029,
644
+ "grad_norm": 8.206791877746582,
645
+ "learning_rate": 0.0001983437773461774,
646
+ "loss": 2.6039,
647
+ "step": 90
648
+ },
649
+ {
650
+ "epoch": 0.017844886753603295,
651
+ "grad_norm": 10.445443153381348,
652
+ "learning_rate": 0.00019830223128641637,
653
+ "loss": 2.3554,
654
+ "step": 91
655
+ },
656
+ {
657
+ "epoch": 0.0180409844102363,
658
+ "grad_norm": 11.756292343139648,
659
+ "learning_rate": 0.00019826017503376666,
660
+ "loss": 1.7371,
661
+ "step": 92
662
+ },
663
+ {
664
+ "epoch": 0.0182370820668693,
665
+ "grad_norm": 7.509032249450684,
666
+ "learning_rate": 0.00019821760880650214,
667
+ "loss": 1.389,
668
+ "step": 93
669
+ },
670
+ {
671
+ "epoch": 0.018433179723502304,
672
+ "grad_norm": 8.619280815124512,
673
+ "learning_rate": 0.00019817453282554333,
674
+ "loss": 1.6818,
675
+ "step": 94
676
+ },
677
+ {
678
+ "epoch": 0.01862927738013531,
679
+ "grad_norm": 9.11640739440918,
680
+ "learning_rate": 0.00019813094731445654,
681
+ "loss": 1.631,
682
+ "step": 95
683
+ },
684
+ {
685
+ "epoch": 0.01882537503676831,
686
+ "grad_norm": 14.109521865844727,
687
+ "learning_rate": 0.00019808685249945245,
688
+ "loss": 2.0497,
689
+ "step": 96
690
+ },
691
+ {
692
+ "epoch": 0.019021472693401313,
693
+ "grad_norm": 10.804281234741211,
694
+ "learning_rate": 0.00019804224860938506,
695
+ "loss": 2.2364,
696
+ "step": 97
697
+ },
698
+ {
699
+ "epoch": 0.019217570350034317,
700
+ "grad_norm": 7.363731384277344,
701
+ "learning_rate": 0.0001979971358757505,
702
+ "loss": 1.0967,
703
+ "step": 98
704
+ },
705
+ {
706
+ "epoch": 0.01941366800666732,
707
+ "grad_norm": 15.269912719726562,
708
+ "learning_rate": 0.0001979515145326859,
709
+ "loss": 2.8752,
710
+ "step": 99
711
+ },
712
+ {
713
+ "epoch": 0.019609765663300322,
714
+ "grad_norm": 5.457535266876221,
715
+ "learning_rate": 0.000197905384816968,
716
+ "loss": 1.7098,
717
+ "step": 100
718
+ },
719
+ {
720
+ "epoch": 0.019805863319933326,
721
+ "grad_norm": 4.689967632293701,
722
+ "learning_rate": 0.00019785874696801202,
723
+ "loss": 2.2133,
724
+ "step": 101
725
+ },
726
+ {
727
+ "epoch": 0.02000196097656633,
728
+ "grad_norm": 10.993409156799316,
729
+ "learning_rate": 0.00019781160122787046,
730
+ "loss": 2.314,
731
+ "step": 102
732
+ },
733
+ {
734
+ "epoch": 0.020198058633199335,
735
+ "grad_norm": 8.199251174926758,
736
+ "learning_rate": 0.00019776394784123177,
737
+ "loss": 2.5164,
738
+ "step": 103
739
+ },
740
+ {
741
+ "epoch": 0.020394156289832335,
742
+ "grad_norm": 15.144885063171387,
743
+ "learning_rate": 0.00019771578705541916,
744
+ "loss": 2.0058,
745
+ "step": 104
746
+ },
747
+ {
748
+ "epoch": 0.02059025394646534,
749
+ "grad_norm": 5.252450466156006,
750
+ "learning_rate": 0.00019766711912038915,
751
+ "loss": 1.7012,
752
+ "step": 105
753
+ },
754
+ {
755
+ "epoch": 0.020786351603098344,
756
+ "grad_norm": 8.265049934387207,
757
+ "learning_rate": 0.0001976179442887305,
758
+ "loss": 1.8646,
759
+ "step": 106
760
+ },
761
+ {
762
+ "epoch": 0.020982449259731348,
763
+ "grad_norm": 8.365408897399902,
764
+ "learning_rate": 0.00019756826281566272,
765
+ "loss": 1.9615,
766
+ "step": 107
767
+ },
768
+ {
769
+ "epoch": 0.02117854691636435,
770
+ "grad_norm": 7.514213562011719,
771
+ "learning_rate": 0.00019751807495903484,
772
+ "loss": 1.4897,
773
+ "step": 108
774
+ },
775
+ {
776
+ "epoch": 0.021374644572997353,
777
+ "grad_norm": 15.234655380249023,
778
+ "learning_rate": 0.00019746738097932407,
779
+ "loss": 2.0467,
780
+ "step": 109
781
+ },
782
+ {
783
+ "epoch": 0.021570742229630357,
784
+ "grad_norm": 6.856448650360107,
785
+ "learning_rate": 0.0001974161811396343,
786
+ "loss": 1.4492,
787
+ "step": 110
788
+ },
789
+ {
790
+ "epoch": 0.021766839886263357,
791
+ "grad_norm": 7.893224716186523,
792
+ "learning_rate": 0.00019736447570569503,
793
+ "loss": 1.919,
794
+ "step": 111
795
+ },
796
+ {
797
+ "epoch": 0.02196293754289636,
798
+ "grad_norm": 8.966511726379395,
799
+ "learning_rate": 0.0001973122649458597,
800
+ "loss": 2.4484,
801
+ "step": 112
802
+ },
803
+ {
804
+ "epoch": 0.022159035199529366,
805
+ "grad_norm": 7.631579875946045,
806
+ "learning_rate": 0.00019725954913110442,
807
+ "loss": 1.4992,
808
+ "step": 113
809
+ },
810
+ {
811
+ "epoch": 0.02235513285616237,
812
+ "grad_norm": 7.418518543243408,
813
+ "learning_rate": 0.0001972063285350266,
814
+ "loss": 0.8401,
815
+ "step": 114
816
+ },
817
+ {
818
+ "epoch": 0.02255123051279537,
819
+ "grad_norm": 7.739930629730225,
820
+ "learning_rate": 0.00019715260343384347,
821
+ "loss": 2.0713,
822
+ "step": 115
823
+ },
824
+ {
825
+ "epoch": 0.022747328169428375,
826
+ "grad_norm": 6.441893100738525,
827
+ "learning_rate": 0.00019709837410639063,
828
+ "loss": 1.4438,
829
+ "step": 116
830
+ },
831
+ {
832
+ "epoch": 0.02294342582606138,
833
+ "grad_norm": 6.008083820343018,
834
+ "learning_rate": 0.0001970436408341207,
835
+ "loss": 1.3503,
836
+ "step": 117
837
+ },
838
+ {
839
+ "epoch": 0.023139523482694383,
840
+ "grad_norm": 7.100820541381836,
841
+ "learning_rate": 0.00019698840390110176,
842
+ "loss": 1.4726,
843
+ "step": 118
844
+ },
845
+ {
846
+ "epoch": 0.023335621139327384,
847
+ "grad_norm": 10.213756561279297,
848
+ "learning_rate": 0.0001969326635940159,
849
+ "loss": 0.8107,
850
+ "step": 119
851
+ },
852
+ {
853
+ "epoch": 0.023531718795960388,
854
+ "grad_norm": 5.251387119293213,
855
+ "learning_rate": 0.00019687642020215775,
856
+ "loss": 1.5542,
857
+ "step": 120
858
+ },
859
+ {
860
+ "epoch": 0.023727816452593392,
861
+ "grad_norm": 6.100740432739258,
862
+ "learning_rate": 0.00019681967401743297,
863
+ "loss": 1.2512,
864
+ "step": 121
865
+ },
866
+ {
867
+ "epoch": 0.023923914109226396,
868
+ "grad_norm": 7.356696128845215,
869
+ "learning_rate": 0.00019676242533435678,
870
+ "loss": 2.4725,
871
+ "step": 122
872
+ },
873
+ {
874
+ "epoch": 0.024120011765859397,
875
+ "grad_norm": 11.542431831359863,
876
+ "learning_rate": 0.00019670467445005233,
877
+ "loss": 3.0307,
878
+ "step": 123
879
+ },
880
+ {
881
+ "epoch": 0.0243161094224924,
882
+ "grad_norm": 12.166086196899414,
883
+ "learning_rate": 0.00019664642166424928,
884
+ "loss": 1.2784,
885
+ "step": 124
886
+ },
887
+ {
888
+ "epoch": 0.024512207079125405,
889
+ "grad_norm": 5.222433090209961,
890
+ "learning_rate": 0.00019658766727928206,
891
+ "loss": 1.1759,
892
+ "step": 125
893
+ },
894
+ {
895
+ "epoch": 0.024708304735758406,
896
+ "grad_norm": 4.77174711227417,
897
+ "learning_rate": 0.00019652841160008858,
898
+ "loss": 1.1041,
899
+ "step": 126
900
+ },
901
+ {
902
+ "epoch": 0.02490440239239141,
903
+ "grad_norm": 4.879274368286133,
904
+ "learning_rate": 0.0001964686549342084,
905
+ "loss": 2.6326,
906
+ "step": 127
907
+ },
908
+ {
909
+ "epoch": 0.025100500049024414,
910
+ "grad_norm": 14.171689987182617,
911
+ "learning_rate": 0.00019640839759178116,
912
+ "loss": 3.4144,
913
+ "step": 128
914
+ },
915
+ {
916
+ "epoch": 0.02529659770565742,
917
+ "grad_norm": 7.598373889923096,
918
+ "learning_rate": 0.00019634763988554522,
919
+ "loss": 2.0596,
920
+ "step": 129
921
+ },
922
+ {
923
+ "epoch": 0.02549269536229042,
924
+ "grad_norm": 6.88770866394043,
925
+ "learning_rate": 0.00019628638213083565,
926
+ "loss": 1.4691,
927
+ "step": 130
928
+ },
929
+ {
930
+ "epoch": 0.025688793018923423,
931
+ "grad_norm": 7.128096580505371,
932
+ "learning_rate": 0.00019622462464558295,
933
+ "loss": 1.3307,
934
+ "step": 131
935
+ },
936
+ {
937
+ "epoch": 0.025884890675556427,
938
+ "grad_norm": 6.430881500244141,
939
+ "learning_rate": 0.00019616236775031113,
940
+ "loss": 0.9491,
941
+ "step": 132
942
+ },
943
+ {
944
+ "epoch": 0.02608098833218943,
945
+ "grad_norm": 9.912070274353027,
946
+ "learning_rate": 0.00019609961176813624,
947
+ "loss": 2.5006,
948
+ "step": 133
949
+ },
950
+ {
951
+ "epoch": 0.026277085988822432,
952
+ "grad_norm": 8.550467491149902,
953
+ "learning_rate": 0.0001960363570247645,
954
+ "loss": 2.4952,
955
+ "step": 134
956
+ },
957
+ {
958
+ "epoch": 0.026473183645455436,
959
+ "grad_norm": 4.201476573944092,
960
+ "learning_rate": 0.0001959726038484909,
961
+ "loss": 0.9033,
962
+ "step": 135
963
+ },
964
+ {
965
+ "epoch": 0.02666928130208844,
966
+ "grad_norm": 5.774847984313965,
967
+ "learning_rate": 0.00019590835257019714,
968
+ "loss": 2.1291,
969
+ "step": 136
970
+ },
971
+ {
972
+ "epoch": 0.026865378958721445,
973
+ "grad_norm": 8.179195404052734,
974
+ "learning_rate": 0.00019584360352335023,
975
+ "loss": 2.7527,
976
+ "step": 137
977
+ },
978
+ {
979
+ "epoch": 0.027061476615354445,
980
+ "grad_norm": 15.658841133117676,
981
+ "learning_rate": 0.0001957783570440005,
982
+ "loss": 1.8304,
983
+ "step": 138
984
+ },
985
+ {
986
+ "epoch": 0.02725757427198745,
987
+ "grad_norm": 5.7399163246154785,
988
+ "learning_rate": 0.0001957126134707801,
989
+ "loss": 1.7071,
990
+ "step": 139
991
+ },
992
+ {
993
+ "epoch": 0.027453671928620454,
994
+ "grad_norm": 5.0817389488220215,
995
+ "learning_rate": 0.00019564637314490108,
996
+ "loss": 1.8933,
997
+ "step": 140
998
+ },
999
+ {
1000
+ "epoch": 0.027649769585253458,
1001
+ "grad_norm": 5.634946346282959,
1002
+ "learning_rate": 0.0001955796364101535,
1003
+ "loss": 1.7343,
1004
+ "step": 141
1005
+ },
1006
+ {
1007
+ "epoch": 0.02784586724188646,
1008
+ "grad_norm": 6.406938552856445,
1009
+ "learning_rate": 0.00019551240361290407,
1010
+ "loss": 2.3013,
1011
+ "step": 142
1012
+ },
1013
+ {
1014
+ "epoch": 0.028041964898519463,
1015
+ "grad_norm": 8.239458084106445,
1016
+ "learning_rate": 0.00019544467510209388,
1017
+ "loss": 1.2177,
1018
+ "step": 143
1019
+ },
1020
+ {
1021
+ "epoch": 0.028238062555152467,
1022
+ "grad_norm": 11.887965202331543,
1023
+ "learning_rate": 0.0001953764512292369,
1024
+ "loss": 2.4312,
1025
+ "step": 144
1026
+ },
1027
+ {
1028
+ "epoch": 0.028434160211785468,
1029
+ "grad_norm": 7.482359409332275,
1030
+ "learning_rate": 0.00019530773234841803,
1031
+ "loss": 1.1083,
1032
+ "step": 145
1033
+ },
1034
+ {
1035
+ "epoch": 0.028630257868418472,
1036
+ "grad_norm": 8.86729621887207,
1037
+ "learning_rate": 0.00019523851881629126,
1038
+ "loss": 1.6451,
1039
+ "step": 146
1040
+ },
1041
+ {
1042
+ "epoch": 0.028826355525051476,
1043
+ "grad_norm": 7.395509719848633,
1044
+ "learning_rate": 0.0001951688109920778,
1045
+ "loss": 1.31,
1046
+ "step": 147
1047
+ },
1048
+ {
1049
+ "epoch": 0.02902245318168448,
1050
+ "grad_norm": 4.955163955688477,
1051
+ "learning_rate": 0.00019509860923756442,
1052
+ "loss": 2.5206,
1053
+ "step": 148
1054
+ },
1055
+ {
1056
+ "epoch": 0.02921855083831748,
1057
+ "grad_norm": 5.034746170043945,
1058
+ "learning_rate": 0.00019502791391710125,
1059
+ "loss": 0.9336,
1060
+ "step": 149
1061
+ },
1062
+ {
1063
+ "epoch": 0.029414648494950485,
1064
+ "grad_norm": 12.375234603881836,
1065
+ "learning_rate": 0.00019495672539760007,
1066
+ "loss": 2.1276,
1067
+ "step": 150
1068
+ },
1069
+ {
1070
+ "epoch": 0.02961074615158349,
1071
+ "grad_norm": 5.832932949066162,
1072
+ "learning_rate": 0.00019488504404853248,
1073
+ "loss": 1.3252,
1074
+ "step": 151
1075
+ },
1076
+ {
1077
+ "epoch": 0.029806843808216493,
1078
+ "grad_norm": 5.934417724609375,
1079
+ "learning_rate": 0.00019481287024192775,
1080
+ "loss": 1.5907,
1081
+ "step": 152
1082
+ },
1083
+ {
1084
+ "epoch": 0.030002941464849494,
1085
+ "grad_norm": 9.238896369934082,
1086
+ "learning_rate": 0.00019474020435237117,
1087
+ "loss": 1.1184,
1088
+ "step": 153
1089
+ },
1090
+ {
1091
+ "epoch": 0.030199039121482498,
1092
+ "grad_norm": 9.787931442260742,
1093
+ "learning_rate": 0.00019466704675700185,
1094
+ "loss": 1.4931,
1095
+ "step": 154
1096
+ },
1097
+ {
1098
+ "epoch": 0.030395136778115502,
1099
+ "grad_norm": 7.260796070098877,
1100
+ "learning_rate": 0.00019459339783551094,
1101
+ "loss": 0.8924,
1102
+ "step": 155
1103
+ },
1104
+ {
1105
+ "epoch": 0.030591234434748506,
1106
+ "grad_norm": 8.712836265563965,
1107
+ "learning_rate": 0.00019451925797013954,
1108
+ "loss": 1.586,
1109
+ "step": 156
1110
+ },
1111
+ {
1112
+ "epoch": 0.030787332091381507,
1113
+ "grad_norm": 11.15104866027832,
1114
+ "learning_rate": 0.00019444462754567682,
1115
+ "loss": 1.5007,
1116
+ "step": 157
1117
+ },
1118
+ {
1119
+ "epoch": 0.03098342974801451,
1120
+ "grad_norm": 7.158255100250244,
1121
+ "learning_rate": 0.00019436950694945798,
1122
+ "loss": 2.4118,
1123
+ "step": 158
1124
+ },
1125
+ {
1126
+ "epoch": 0.031179527404647515,
1127
+ "grad_norm": 11.58385944366455,
1128
+ "learning_rate": 0.00019429389657136213,
1129
+ "loss": 2.1638,
1130
+ "step": 159
1131
+ },
1132
+ {
1133
+ "epoch": 0.03137562506128052,
1134
+ "grad_norm": 7.469117641448975,
1135
+ "learning_rate": 0.00019421779680381054,
1136
+ "loss": 3.0682,
1137
+ "step": 160
1138
+ },
1139
+ {
1140
+ "epoch": 0.031571722717913524,
1141
+ "grad_norm": 10.78966999053955,
1142
+ "learning_rate": 0.00019414120804176426,
1143
+ "loss": 1.1822,
1144
+ "step": 161
1145
+ },
1146
+ {
1147
+ "epoch": 0.03176782037454652,
1148
+ "grad_norm": 9.68694019317627,
1149
+ "learning_rate": 0.00019406413068272238,
1150
+ "loss": 2.5351,
1151
+ "step": 162
1152
+ },
1153
+ {
1154
+ "epoch": 0.031963918031179525,
1155
+ "grad_norm": 11.67428970336914,
1156
+ "learning_rate": 0.00019398656512671972,
1157
+ "loss": 1.9244,
1158
+ "step": 163
1159
+ },
1160
+ {
1161
+ "epoch": 0.03216001568781253,
1162
+ "grad_norm": 12.72513198852539,
1163
+ "learning_rate": 0.00019390851177632497,
1164
+ "loss": 3.2138,
1165
+ "step": 164
1166
+ },
1167
+ {
1168
+ "epoch": 0.03235611334444553,
1169
+ "grad_norm": 8.345921516418457,
1170
+ "learning_rate": 0.00019382997103663838,
1171
+ "loss": 2.6435,
1172
+ "step": 165
1173
+ },
1174
+ {
1175
+ "epoch": 0.03255221100107854,
1176
+ "grad_norm": 7.740304470062256,
1177
+ "learning_rate": 0.0001937509433152899,
1178
+ "loss": 0.8189,
1179
+ "step": 166
1180
+ },
1181
+ {
1182
+ "epoch": 0.03274830865771154,
1183
+ "grad_norm": 9.329862594604492,
1184
+ "learning_rate": 0.0001936714290224368,
1185
+ "loss": 1.4106,
1186
+ "step": 167
1187
+ },
1188
+ {
1189
+ "epoch": 0.032944406314344546,
1190
+ "grad_norm": 7.179844379425049,
1191
+ "learning_rate": 0.00019359142857076176,
1192
+ "loss": 1.8125,
1193
+ "step": 168
1194
+ },
1195
+ {
1196
+ "epoch": 0.03314050397097755,
1197
+ "grad_norm": 7.835447311401367,
1198
+ "learning_rate": 0.00019351094237547066,
1199
+ "loss": 1.6617,
1200
+ "step": 169
1201
+ },
1202
+ {
1203
+ "epoch": 0.03333660162761055,
1204
+ "grad_norm": 6.018518924713135,
1205
+ "learning_rate": 0.0001934299708542904,
1206
+ "loss": 2.4333,
1207
+ "step": 170
1208
+ },
1209
+ {
1210
+ "epoch": 0.03353269928424355,
1211
+ "grad_norm": 8.176468849182129,
1212
+ "learning_rate": 0.00019334851442746664,
1213
+ "loss": 2.5915,
1214
+ "step": 171
1215
+ },
1216
+ {
1217
+ "epoch": 0.033728796940876556,
1218
+ "grad_norm": 8.241739273071289,
1219
+ "learning_rate": 0.00019326657351776186,
1220
+ "loss": 1.666,
1221
+ "step": 172
1222
+ },
1223
+ {
1224
+ "epoch": 0.03392489459750956,
1225
+ "grad_norm": 8.064835548400879,
1226
+ "learning_rate": 0.000193184148550453,
1227
+ "loss": 1.477,
1228
+ "step": 173
1229
+ },
1230
+ {
1231
+ "epoch": 0.034120992254142564,
1232
+ "grad_norm": 5.790217399597168,
1233
+ "learning_rate": 0.00019310123995332917,
1234
+ "loss": 0.7703,
1235
+ "step": 174
1236
+ },
1237
+ {
1238
+ "epoch": 0.03431708991077557,
1239
+ "grad_norm": 9.38430118560791,
1240
+ "learning_rate": 0.00019301784815668974,
1241
+ "loss": 1.5785,
1242
+ "step": 175
1243
+ },
1244
+ {
1245
+ "epoch": 0.03451318756740857,
1246
+ "grad_norm": 8.252826690673828,
1247
+ "learning_rate": 0.00019293397359334167,
1248
+ "loss": 2.1462,
1249
+ "step": 176
1250
+ },
1251
+ {
1252
+ "epoch": 0.03470928522404157,
1253
+ "grad_norm": 12.65652847290039,
1254
+ "learning_rate": 0.00019284961669859766,
1255
+ "loss": 1.3009,
1256
+ "step": 177
1257
+ },
1258
+ {
1259
+ "epoch": 0.034905382880674574,
1260
+ "grad_norm": 6.8490753173828125,
1261
+ "learning_rate": 0.00019276477791027374,
1262
+ "loss": 2.4905,
1263
+ "step": 178
1264
+ },
1265
+ {
1266
+ "epoch": 0.03510148053730758,
1267
+ "grad_norm": 4.2581048011779785,
1268
+ "learning_rate": 0.0001926794576686869,
1269
+ "loss": 0.9042,
1270
+ "step": 179
1271
+ },
1272
+ {
1273
+ "epoch": 0.03529757819394058,
1274
+ "grad_norm": 6.415445327758789,
1275
+ "learning_rate": 0.0001925936564166529,
1276
+ "loss": 2.238,
1277
+ "step": 180
1278
+ },
1279
+ {
1280
+ "epoch": 0.035493675850573586,
1281
+ "grad_norm": 13.620756149291992,
1282
+ "learning_rate": 0.00019250737459948405,
1283
+ "loss": 1.5966,
1284
+ "step": 181
1285
+ },
1286
+ {
1287
+ "epoch": 0.03568977350720659,
1288
+ "grad_norm": 10.609662055969238,
1289
+ "learning_rate": 0.00019242061266498675,
1290
+ "loss": 1.081,
1291
+ "step": 182
1292
+ },
1293
+ {
1294
+ "epoch": 0.035885871163839594,
1295
+ "grad_norm": 8.404073715209961,
1296
+ "learning_rate": 0.00019233337106345925,
1297
+ "loss": 1.849,
1298
+ "step": 183
1299
+ },
1300
+ {
1301
+ "epoch": 0.0360819688204726,
1302
+ "grad_norm": 5.560455322265625,
1303
+ "learning_rate": 0.00019224565024768926,
1304
+ "loss": 1.4533,
1305
+ "step": 184
1306
+ },
1307
+ {
1308
+ "epoch": 0.036278066477105596,
1309
+ "grad_norm": 7.896220684051514,
1310
+ "learning_rate": 0.00019215745067295169,
1311
+ "loss": 2.482,
1312
+ "step": 185
1313
+ },
1314
+ {
1315
+ "epoch": 0.0364741641337386,
1316
+ "grad_norm": 9.554024696350098,
1317
+ "learning_rate": 0.00019206877279700612,
1318
+ "loss": 1.9367,
1319
+ "step": 186
1320
+ },
1321
+ {
1322
+ "epoch": 0.036670261790371604,
1323
+ "grad_norm": 3.333113193511963,
1324
+ "learning_rate": 0.00019197961708009473,
1325
+ "loss": 1.1477,
1326
+ "step": 187
1327
+ },
1328
+ {
1329
+ "epoch": 0.03686635944700461,
1330
+ "grad_norm": 9.468240737915039,
1331
+ "learning_rate": 0.00019188998398493953,
1332
+ "loss": 1.0849,
1333
+ "step": 188
1334
+ },
1335
+ {
1336
+ "epoch": 0.03706245710363761,
1337
+ "grad_norm": 10.807921409606934,
1338
+ "learning_rate": 0.00019179987397674022,
1339
+ "loss": 2.0192,
1340
+ "step": 189
1341
+ },
1342
+ {
1343
+ "epoch": 0.03725855476027062,
1344
+ "grad_norm": 7.14724588394165,
1345
+ "learning_rate": 0.0001917092875231717,
1346
+ "loss": 2.1502,
1347
+ "step": 190
1348
+ },
1349
+ {
1350
+ "epoch": 0.03745465241690362,
1351
+ "grad_norm": 12.262707710266113,
1352
+ "learning_rate": 0.00019161822509438162,
1353
+ "loss": 2.423,
1354
+ "step": 191
1355
+ },
1356
+ {
1357
+ "epoch": 0.03765075007353662,
1358
+ "grad_norm": 35.0489387512207,
1359
+ "learning_rate": 0.000191526687162988,
1360
+ "loss": 2.5959,
1361
+ "step": 192
1362
+ },
1363
+ {
1364
+ "epoch": 0.03784684773016962,
1365
+ "grad_norm": 6.615735054016113,
1366
+ "learning_rate": 0.0001914346742040767,
1367
+ "loss": 1.7733,
1368
+ "step": 193
1369
+ },
1370
+ {
1371
+ "epoch": 0.038042945386802626,
1372
+ "grad_norm": 4.537426471710205,
1373
+ "learning_rate": 0.00019134218669519896,
1374
+ "loss": 1.0028,
1375
+ "step": 194
1376
+ },
1377
+ {
1378
+ "epoch": 0.03823904304343563,
1379
+ "grad_norm": 5.247801303863525,
1380
+ "learning_rate": 0.00019124922511636912,
1381
+ "loss": 0.8412,
1382
+ "step": 195
1383
+ },
1384
+ {
1385
+ "epoch": 0.038435140700068635,
1386
+ "grad_norm": 6.2183918952941895,
1387
+ "learning_rate": 0.00019115578995006173,
1388
+ "loss": 1.7212,
1389
+ "step": 196
1390
+ },
1391
+ {
1392
+ "epoch": 0.03863123835670164,
1393
+ "grad_norm": 9.330825805664062,
1394
+ "learning_rate": 0.00019106188168120948,
1395
+ "loss": 1.5341,
1396
+ "step": 197
1397
+ },
1398
+ {
1399
+ "epoch": 0.03882733601333464,
1400
+ "grad_norm": 9.86260986328125,
1401
+ "learning_rate": 0.00019096750079720037,
1402
+ "loss": 2.765,
1403
+ "step": 198
1404
+ },
1405
+ {
1406
+ "epoch": 0.03902343366996765,
1407
+ "grad_norm": 10.341052055358887,
1408
+ "learning_rate": 0.00019087264778787534,
1409
+ "loss": 1.9024,
1410
+ "step": 199
1411
+ },
1412
+ {
1413
+ "epoch": 0.039219531326600644,
1414
+ "grad_norm": 9.549159049987793,
1415
+ "learning_rate": 0.00019077732314552566,
1416
+ "loss": 1.2644,
1417
+ "step": 200
1418
+ },
1419
+ {
1420
+ "epoch": 0.03941562898323365,
1421
+ "grad_norm": 5.25094747543335,
1422
+ "learning_rate": 0.00019068152736489036,
1423
+ "loss": 1.334,
1424
+ "step": 201
1425
+ },
1426
+ {
1427
+ "epoch": 0.03961172663986665,
1428
+ "grad_norm": 7.197662830352783,
1429
+ "learning_rate": 0.00019058526094315378,
1430
+ "loss": 1.9093,
1431
+ "step": 202
1432
+ },
1433
+ {
1434
+ "epoch": 0.03980782429649966,
1435
+ "grad_norm": 8.476766586303711,
1436
+ "learning_rate": 0.0001904885243799429,
1437
+ "loss": 1.477,
1438
+ "step": 203
1439
+ },
1440
+ {
1441
+ "epoch": 0.04000392195313266,
1442
+ "grad_norm": 8.232537269592285,
1443
+ "learning_rate": 0.00019039131817732462,
1444
+ "loss": 1.4013,
1445
+ "step": 204
1446
+ },
1447
+ {
1448
+ "epoch": 0.040200019609765665,
1449
+ "grad_norm": 15.687997817993164,
1450
+ "learning_rate": 0.0001902936428398035,
1451
+ "loss": 1.6772,
1452
+ "step": 205
1453
+ },
1454
+ {
1455
+ "epoch": 0.04039611726639867,
1456
+ "grad_norm": 7.573246479034424,
1457
+ "learning_rate": 0.00019019549887431877,
1458
+ "loss": 1.5007,
1459
+ "step": 206
1460
+ },
1461
+ {
1462
+ "epoch": 0.040592214923031666,
1463
+ "grad_norm": 11.531679153442383,
1464
+ "learning_rate": 0.0001900968867902419,
1465
+ "loss": 2.6798,
1466
+ "step": 207
1467
+ },
1468
+ {
1469
+ "epoch": 0.04078831257966467,
1470
+ "grad_norm": 6.225399494171143,
1471
+ "learning_rate": 0.00018999780709937398,
1472
+ "loss": 1.3078,
1473
+ "step": 208
1474
+ },
1475
+ {
1476
+ "epoch": 0.040984410236297675,
1477
+ "grad_norm": 10.358306884765625,
1478
+ "learning_rate": 0.0001898982603159429,
1479
+ "loss": 1.7353,
1480
+ "step": 209
1481
+ },
1482
+ {
1483
+ "epoch": 0.04118050789293068,
1484
+ "grad_norm": 8.146821975708008,
1485
+ "learning_rate": 0.00018979824695660087,
1486
+ "loss": 1.415,
1487
+ "step": 210
1488
+ },
1489
+ {
1490
+ "epoch": 0.04137660554956368,
1491
+ "grad_norm": 4.390834808349609,
1492
+ "learning_rate": 0.00018969776754042156,
1493
+ "loss": 1.7612,
1494
+ "step": 211
1495
+ },
1496
+ {
1497
+ "epoch": 0.04157270320619669,
1498
+ "grad_norm": 7.958174228668213,
1499
+ "learning_rate": 0.0001895968225888976,
1500
+ "loss": 2.6614,
1501
+ "step": 212
1502
+ },
1503
+ {
1504
+ "epoch": 0.04176880086282969,
1505
+ "grad_norm": 9.981225967407227,
1506
+ "learning_rate": 0.00018949541262593762,
1507
+ "loss": 2.0158,
1508
+ "step": 213
1509
+ },
1510
+ {
1511
+ "epoch": 0.041964898519462696,
1512
+ "grad_norm": 4.456605911254883,
1513
+ "learning_rate": 0.00018939353817786387,
1514
+ "loss": 1.0621,
1515
+ "step": 214
1516
+ },
1517
+ {
1518
+ "epoch": 0.04216099617609569,
1519
+ "grad_norm": 7.546274662017822,
1520
+ "learning_rate": 0.00018929119977340917,
1521
+ "loss": 1.7333,
1522
+ "step": 215
1523
+ },
1524
+ {
1525
+ "epoch": 0.0423570938327287,
1526
+ "grad_norm": 11.629569053649902,
1527
+ "learning_rate": 0.0001891883979437143,
1528
+ "loss": 1.4268,
1529
+ "step": 216
1530
+ },
1531
+ {
1532
+ "epoch": 0.0425531914893617,
1533
+ "grad_norm": 17.710948944091797,
1534
+ "learning_rate": 0.00018908513322232528,
1535
+ "loss": 2.8701,
1536
+ "step": 217
1537
+ },
1538
+ {
1539
+ "epoch": 0.042749289145994705,
1540
+ "grad_norm": 6.267049789428711,
1541
+ "learning_rate": 0.00018898140614519054,
1542
+ "loss": 1.6313,
1543
+ "step": 218
1544
+ },
1545
+ {
1546
+ "epoch": 0.04294538680262771,
1547
+ "grad_norm": 4.971591949462891,
1548
+ "learning_rate": 0.00018887721725065814,
1549
+ "loss": 2.0962,
1550
+ "step": 219
1551
+ },
1552
+ {
1553
+ "epoch": 0.043141484459260714,
1554
+ "grad_norm": 5.603585243225098,
1555
+ "learning_rate": 0.00018877256707947306,
1556
+ "loss": 0.6683,
1557
+ "step": 220
1558
+ },
1559
+ {
1560
+ "epoch": 0.04333758211589372,
1561
+ "grad_norm": 6.029137134552002,
1562
+ "learning_rate": 0.00018866745617477423,
1563
+ "loss": 1.5375,
1564
+ "step": 221
1565
+ },
1566
+ {
1567
+ "epoch": 0.043533679772526715,
1568
+ "grad_norm": 7.4105143547058105,
1569
+ "learning_rate": 0.00018856188508209183,
1570
+ "loss": 1.9524,
1571
+ "step": 222
1572
+ },
1573
+ {
+ "epoch": 0.04372977742915972,
+ "grad_norm": 8.321500778198242,
+ "learning_rate": 0.00018845585434934452,
+ "loss": 2.1109,
+ "step": 223
+ },
+ {
+ "epoch": 0.04392587508579272,
+ "grad_norm": 9.238992691040039,
+ "learning_rate": 0.00018834936452683638,
+ "loss": 1.4247,
+ "step": 224
+ },
+ {
+ "epoch": 0.04412197274242573,
+ "grad_norm": 5.125700950622559,
+ "learning_rate": 0.00018824241616725434,
+ "loss": 1.1266,
+ "step": 225
+ },
+ {
+ "epoch": 0.04431807039905873,
+ "grad_norm": 7.538069725036621,
+ "learning_rate": 0.000188135009825665,
+ "loss": 2.1554,
+ "step": 226
+ },
+ {
+ "epoch": 0.044514168055691736,
+ "grad_norm": 8.309137344360352,
+ "learning_rate": 0.00018802714605951199,
+ "loss": 1.1435,
+ "step": 227
+ },
+ {
+ "epoch": 0.04471026571232474,
+ "grad_norm": 22.02942657470703,
+ "learning_rate": 0.00018791882542861302,
+ "loss": 1.8154,
+ "step": 228
+ },
+ {
+ "epoch": 0.044906363368957744,
+ "grad_norm": 7.017299652099609,
+ "learning_rate": 0.0001878100484951569,
+ "loss": 1.4998,
+ "step": 229
+ },
+ {
+ "epoch": 0.04510246102559074,
+ "grad_norm": 18.39406394958496,
+ "learning_rate": 0.00018770081582370068,
+ "loss": 2.1662,
+ "step": 230
+ },
+ {
+ "epoch": 0.045298558682223745,
+ "grad_norm": 9.11802864074707,
+ "learning_rate": 0.0001875911279811667,
+ "loss": 0.7446,
+ "step": 231
+ },
+ {
+ "epoch": 0.04549465633885675,
+ "grad_norm": 7.193735122680664,
+ "learning_rate": 0.00018748098553683968,
+ "loss": 1.9472,
+ "step": 232
+ },
+ {
+ "epoch": 0.045690753995489754,
+ "grad_norm": 23.407245635986328,
+ "learning_rate": 0.0001873703890623637,
+ "loss": 2.1782,
+ "step": 233
+ },
+ {
+ "epoch": 0.04588685165212276,
+ "grad_norm": 6.547053813934326,
+ "learning_rate": 0.00018725933913173938,
+ "loss": 1.9687,
+ "step": 234
+ },
+ {
+ "epoch": 0.04608294930875576,
+ "grad_norm": 10.576699256896973,
+ "learning_rate": 0.00018714783632132068,
+ "loss": 1.8832,
+ "step": 235
+ },
+ {
+ "epoch": 0.046279046965388766,
+ "grad_norm": 5.852027416229248,
+ "learning_rate": 0.00018703588120981207,
+ "loss": 1.8932,
+ "step": 236
+ },
+ {
+ "epoch": 0.04647514462202176,
+ "grad_norm": 7.023755073547363,
+ "learning_rate": 0.00018692347437826548,
+ "loss": 3.7953,
+ "step": 237
+ },
+ {
+ "epoch": 0.04667124227865477,
+ "grad_norm": 13.61612606048584,
+ "learning_rate": 0.00018681061641007737,
+ "loss": 1.9077,
+ "step": 238
+ },
+ {
+ "epoch": 0.04686733993528777,
+ "grad_norm": 5.3344526290893555,
+ "learning_rate": 0.0001866973078909854,
+ "loss": 1.4342,
+ "step": 239
+ },
+ {
+ "epoch": 0.047063437591920776,
+ "grad_norm": 38.80408477783203,
+ "learning_rate": 0.00018658354940906586,
+ "loss": 2.3665,
+ "step": 240
+ },
+ {
+ "epoch": 0.04725953524855378,
+ "grad_norm": 9.670344352722168,
+ "learning_rate": 0.00018646934155473022,
+ "loss": 0.9006,
+ "step": 241
+ },
+ {
+ "epoch": 0.047455632905186784,
+ "grad_norm": 5.1102495193481445,
+ "learning_rate": 0.00018635468492072228,
+ "loss": 1.2289,
+ "step": 242
+ },
+ {
+ "epoch": 0.04765173056181979,
+ "grad_norm": 9.1209077835083,
+ "learning_rate": 0.00018623958010211493,
+ "loss": 1.6009,
+ "step": 243
+ },
+ {
+ "epoch": 0.04784782821845279,
+ "grad_norm": 16.793027877807617,
+ "learning_rate": 0.0001861240276963073,
+ "loss": 0.94,
+ "step": 244
+ },
+ {
+ "epoch": 0.04804392587508579,
+ "grad_norm": 6.90054988861084,
+ "learning_rate": 0.00018600802830302134,
+ "loss": 1.559,
+ "step": 245
+ },
+ {
+ "epoch": 0.048240023531718794,
+ "grad_norm": 13.111268043518066,
+ "learning_rate": 0.0001858915825242991,
+ "loss": 2.1186,
+ "step": 246
+ },
+ {
+ "epoch": 0.0484361211883518,
+ "grad_norm": 6.356579780578613,
+ "learning_rate": 0.00018577469096449925,
+ "loss": 1.6653,
+ "step": 247
+ },
+ {
+ "epoch": 0.0486322188449848,
+ "grad_norm": 9.505541801452637,
+ "learning_rate": 0.00018565735423029404,
+ "loss": 0.9774,
+ "step": 248
+ },
+ {
+ "epoch": 0.048828316501617806,
+ "grad_norm": 8.927581787109375,
+ "learning_rate": 0.00018553957293066632,
+ "loss": 2.6455,
+ "step": 249
+ },
+ {
+ "epoch": 0.04902441415825081,
+ "grad_norm": 7.568793773651123,
+ "learning_rate": 0.00018542134767690616,
+ "loss": 1.1464,
+ "step": 250
+ },
+ {
+ "epoch": 0.049220511814883815,
+ "grad_norm": 7.632232189178467,
+ "learning_rate": 0.00018530267908260784,
+ "loss": 1.2671,
+ "step": 251
+ },
+ {
+ "epoch": 0.04941660947151681,
+ "grad_norm": 4.4279561042785645,
+ "learning_rate": 0.00018518356776366657,
+ "loss": 2.0384,
+ "step": 252
+ },
+ {
+ "epoch": 0.049612707128149816,
+ "grad_norm": 10.818602561950684,
+ "learning_rate": 0.00018506401433827528,
+ "loss": 1.0559,
+ "step": 253
+ },
+ {
+ "epoch": 0.04980880478478282,
+ "grad_norm": 5.57148551940918,
+ "learning_rate": 0.00018494401942692153,
+ "loss": 0.9603,
+ "step": 254
+ },
+ {
+ "epoch": 0.050004902441415824,
+ "grad_norm": 11.1985502243042,
+ "learning_rate": 0.00018482358365238413,
+ "loss": 2.4928,
+ "step": 255
+ },
+ {
+ "epoch": 0.05020100009804883,
+ "grad_norm": 4.890799522399902,
+ "learning_rate": 0.00018470270763973004,
+ "loss": 1.4034,
+ "step": 256
+ },
+ {
+ "epoch": 0.05039709775468183,
+ "grad_norm": 6.2078680992126465,
+ "learning_rate": 0.00018458139201631108,
+ "loss": 1.782,
+ "step": 257
+ },
+ {
+ "epoch": 0.05059319541131484,
+ "grad_norm": 24.89278221130371,
+ "learning_rate": 0.00018445963741176065,
+ "loss": 3.7879,
+ "step": 258
+ },
+ {
+ "epoch": 0.05078929306794784,
+ "grad_norm": 5.363570213317871,
+ "learning_rate": 0.00018433744445799045,
+ "loss": 1.4292,
+ "step": 259
+ },
+ {
+ "epoch": 0.05098539072458084,
+ "grad_norm": 7.669764041900635,
+ "learning_rate": 0.0001842148137891873,
+ "loss": 2.0483,
+ "step": 260
+ },
+ {
+ "epoch": 0.05118148838121384,
+ "grad_norm": 5.229150295257568,
+ "learning_rate": 0.00018409174604180976,
+ "loss": 3.2863,
+ "step": 261
+ },
+ {
+ "epoch": 0.05137758603784685,
+ "grad_norm": 5.850373268127441,
+ "learning_rate": 0.0001839682418545848,
+ "loss": 1.8197,
+ "step": 262
+ },
+ {
+ "epoch": 0.05157368369447985,
+ "grad_norm": 7.138283729553223,
+ "learning_rate": 0.00018384430186850454,
+ "loss": 2.7101,
+ "step": 263
+ },
+ {
+ "epoch": 0.051769781351112855,
+ "grad_norm": 10.918169975280762,
+ "learning_rate": 0.000183719926726823,
+ "loss": 1.8243,
+ "step": 264
+ },
+ {
+ "epoch": 0.05196587900774586,
+ "grad_norm": 9.205517768859863,
+ "learning_rate": 0.00018359511707505258,
+ "loss": 1.4992,
+ "step": 265
+ },
+ {
+ "epoch": 0.05216197666437886,
+ "grad_norm": 8.567139625549316,
+ "learning_rate": 0.00018346987356096086,
+ "loss": 1.051,
+ "step": 266
+ },
+ {
+ "epoch": 0.05235807432101187,
+ "grad_norm": 10.313075065612793,
+ "learning_rate": 0.00018334419683456717,
+ "loss": 2.6062,
+ "step": 267
+ },
+ {
+ "epoch": 0.052554171977644865,
+ "grad_norm": 7.515801906585693,
+ "learning_rate": 0.0001832180875481392,
+ "loss": 1.266,
+ "step": 268
+ },
+ {
+ "epoch": 0.05275026963427787,
+ "grad_norm": 5.345809459686279,
+ "learning_rate": 0.00018309154635618965,
+ "loss": 1.2526,
+ "step": 269
+ },
+ {
+ "epoch": 0.05294636729091087,
+ "grad_norm": 13.568882942199707,
+ "learning_rate": 0.00018296457391547296,
+ "loss": 2.5183,
+ "step": 270
+ },
+ {
+ "epoch": 0.05314246494754388,
+ "grad_norm": 10.022235870361328,
+ "learning_rate": 0.00018283717088498155,
+ "loss": 2.2774,
+ "step": 271
+ },
+ {
+ "epoch": 0.05333856260417688,
+ "grad_norm": 6.537176132202148,
+ "learning_rate": 0.0001827093379259428,
+ "loss": 1.4989,
+ "step": 272
+ },
+ {
+ "epoch": 0.053534660260809885,
+ "grad_norm": 17.213987350463867,
+ "learning_rate": 0.00018258107570181533,
+ "loss": 2.4885,
+ "step": 273
+ },
+ {
+ "epoch": 0.05373075791744289,
+ "grad_norm": 6.48647403717041,
+ "learning_rate": 0.00018245238487828573,
+ "loss": 1.2309,
+ "step": 274
+ },
+ {
+ "epoch": 0.05392685557407589,
+ "grad_norm": 5.479822158813477,
+ "learning_rate": 0.000182323266123265,
+ "loss": 1.8959,
+ "step": 275
+ },
+ {
+ "epoch": 0.05412295323070889,
+ "grad_norm": 7.716124534606934,
+ "learning_rate": 0.00018219372010688515,
+ "loss": 1.8321,
+ "step": 276
+ },
+ {
+ "epoch": 0.054319050887341895,
+ "grad_norm": 9.968965530395508,
+ "learning_rate": 0.00018206374750149567,
+ "loss": 4.1652,
+ "step": 277
+ },
+ {
+ "epoch": 0.0545151485439749,
+ "grad_norm": 6.009235382080078,
+ "learning_rate": 0.00018193334898166007,
+ "loss": 0.8178,
+ "step": 278
+ },
+ {
+ "epoch": 0.0547112462006079,
+ "grad_norm": 8.031886100769043,
+ "learning_rate": 0.00018180252522415242,
+ "loss": 1.783,
+ "step": 279
+ },
+ {
+ "epoch": 0.05490734385724091,
+ "grad_norm": 5.5589680671691895,
+ "learning_rate": 0.00018167127690795368,
+ "loss": 1.3049,
+ "step": 280
+ },
+ {
+ "epoch": 0.05510344151387391,
+ "grad_norm": 5.04995059967041,
+ "learning_rate": 0.0001815396047142485,
+ "loss": 0.8962,
+ "step": 281
+ },
+ {
+ "epoch": 0.055299539170506916,
+ "grad_norm": 5.3526692390441895,
+ "learning_rate": 0.0001814075093264212,
+ "loss": 1.201,
+ "step": 282
+ },
+ {
+ "epoch": 0.05549563682713991,
+ "grad_norm": 11.980429649353027,
+ "learning_rate": 0.00018127499143005268,
+ "loss": 0.6955,
+ "step": 283
+ },
+ {
+ "epoch": 0.05569173448377292,
+ "grad_norm": 38.28229904174805,
+ "learning_rate": 0.00018114205171291663,
+ "loss": 1.7335,
+ "step": 284
+ },
+ {
+ "epoch": 0.05588783214040592,
+ "grad_norm": 6.15138053894043,
+ "learning_rate": 0.000181008690864976,
+ "loss": 1.2766,
+ "step": 285
+ },
+ {
+ "epoch": 0.056083929797038926,
+ "grad_norm": 7.846836566925049,
+ "learning_rate": 0.00018087490957837944,
+ "loss": 1.155,
+ "step": 286
+ },
+ {
+ "epoch": 0.05628002745367193,
+ "grad_norm": 7.675628185272217,
+ "learning_rate": 0.00018074070854745772,
+ "loss": 1.6129,
+ "step": 287
+ },
+ {
+ "epoch": 0.056476125110304934,
+ "grad_norm": 12.245649337768555,
+ "learning_rate": 0.00018060608846872005,
+ "loss": 1.7585,
+ "step": 288
+ },
+ {
+ "epoch": 0.05667222276693794,
+ "grad_norm": 10.520101547241211,
+ "learning_rate": 0.00018047105004085053,
+ "loss": 1.9265,
+ "step": 289
+ },
+ {
+ "epoch": 0.056868320423570935,
+ "grad_norm": 7.400151252746582,
+ "learning_rate": 0.00018033559396470454,
+ "loss": 1.4189,
+ "step": 290
+ },
+ {
+ "epoch": 0.05706441808020394,
+ "grad_norm": 12.058060646057129,
+ "learning_rate": 0.00018019972094330503,
+ "loss": 2.3312,
+ "step": 291
+ },
+ {
+ "epoch": 0.057260515736836944,
+ "grad_norm": 5.313794136047363,
+ "learning_rate": 0.00018006343168183893,
+ "loss": 2.0051,
+ "step": 292
+ },
+ {
+ "epoch": 0.05745661339346995,
+ "grad_norm": 11.182997703552246,
+ "learning_rate": 0.0001799267268876535,
+ "loss": 1.4779,
+ "step": 293
+ },
+ {
+ "epoch": 0.05765271105010295,
+ "grad_norm": 16.24866485595703,
+ "learning_rate": 0.0001797896072702526,
+ "loss": 2.4689,
+ "step": 294
+ },
+ {
+ "epoch": 0.057848808706735956,
+ "grad_norm": 7.471411228179932,
+ "learning_rate": 0.00017965207354129307,
+ "loss": 3.0599,
+ "step": 295
+ },
+ {
+ "epoch": 0.05804490636336896,
+ "grad_norm": 7.715878486633301,
+ "learning_rate": 0.00017951412641458098,
+ "loss": 0.8256,
+ "step": 296
+ },
+ {
+ "epoch": 0.058241004020001964,
+ "grad_norm": 22.084482192993164,
+ "learning_rate": 0.000179375766606068,
+ "loss": 2.457,
+ "step": 297
+ },
+ {
+ "epoch": 0.05843710167663496,
+ "grad_norm": 8.041847229003906,
+ "learning_rate": 0.00017923699483384753,
+ "loss": 1.5642,
+ "step": 298
+ },
+ {
+ "epoch": 0.058633199333267966,
+ "grad_norm": 12.814888000488281,
+ "learning_rate": 0.00017909781181815117,
+ "loss": 1.5129,
+ "step": 299
+ },
+ {
+ "epoch": 0.05882929698990097,
+ "grad_norm": 9.216371536254883,
+ "learning_rate": 0.0001789582182813449,
+ "loss": 2.0632,
+ "step": 300
+ },
+ {
+ "epoch": 0.059025394646533974,
+ "grad_norm": 12.80371379852295,
+ "learning_rate": 0.00017881821494792528,
+ "loss": 2.8705,
+ "step": 301
+ },
+ {
+ "epoch": 0.05922149230316698,
+ "grad_norm": 7.234943389892578,
+ "learning_rate": 0.00017867780254451576,
+ "loss": 2.6664,
+ "step": 302
+ },
+ {
+ "epoch": 0.05941758995979998,
+ "grad_norm": 11.168726921081543,
+ "learning_rate": 0.00017853698179986282,
+ "loss": 1.347,
+ "step": 303
+ },
+ {
+ "epoch": 0.059613687616432987,
+ "grad_norm": 19.369266510009766,
+ "learning_rate": 0.00017839575344483238,
+ "loss": 2.68,
+ "step": 304
+ },
+ {
+ "epoch": 0.059809785273065984,
+ "grad_norm": 7.1730570793151855,
+ "learning_rate": 0.0001782541182124057,
+ "loss": 2.3908,
+ "step": 305
+ },
+ {
+ "epoch": 0.06000588292969899,
+ "grad_norm": 7.243929862976074,
+ "learning_rate": 0.0001781120768376759,
+ "loss": 1.0056,
+ "step": 306
+ },
+ {
+ "epoch": 0.06020198058633199,
+ "grad_norm": 7.748988628387451,
+ "learning_rate": 0.00017796963005784394,
+ "loss": 2.1776,
+ "step": 307
+ },
+ {
+ "epoch": 0.060398078242964996,
+ "grad_norm": 13.446945190429688,
+ "learning_rate": 0.0001778267786122148,
+ "loss": 2.3275,
+ "step": 308
+ },
+ {
+ "epoch": 0.060594175899598,
+ "grad_norm": 10.720627784729004,
+ "learning_rate": 0.0001776835232421938,
+ "loss": 1.046,
+ "step": 309
+ },
+ {
+ "epoch": 0.060790273556231005,
+ "grad_norm": 11.274985313415527,
+ "learning_rate": 0.00017753986469128257,
+ "loss": 2.4269,
+ "step": 310
+ },
+ {
+ "epoch": 0.06098637121286401,
+ "grad_norm": 8.671335220336914,
+ "learning_rate": 0.00017739580370507532,
+ "loss": 2.1488,
+ "step": 311
+ },
+ {
+ "epoch": 0.06118246886949701,
+ "grad_norm": 8.375978469848633,
+ "learning_rate": 0.0001772513410312548,
+ "loss": 1.8458,
+ "step": 312
+ },
+ {
+ "epoch": 0.06137856652613001,
+ "grad_norm": 11.178112983703613,
+ "learning_rate": 0.00017710647741958868,
+ "loss": 2.7169,
+ "step": 313
+ },
+ {
+ "epoch": 0.061574664182763014,
+ "grad_norm": 8.29799747467041,
+ "learning_rate": 0.00017696121362192544,
+ "loss": 1.455,
+ "step": 314
+ },
+ {
+ "epoch": 0.06177076183939602,
+ "grad_norm": 6.712766647338867,
+ "learning_rate": 0.00017681555039219054,
+ "loss": 1.2604,
+ "step": 315
+ },
+ {
+ "epoch": 0.06196685949602902,
+ "grad_norm": 7.891608238220215,
+ "learning_rate": 0.00017666948848638257,
+ "loss": 2.1795,
+ "step": 316
+ },
+ {
+ "epoch": 0.06216295715266203,
+ "grad_norm": 5.039219379425049,
+ "learning_rate": 0.00017652302866256916,
+ "loss": 0.9069,
+ "step": 317
+ },
+ {
+ "epoch": 0.06235905480929503,
+ "grad_norm": 9.421103477478027,
+ "learning_rate": 0.00017637617168088325,
+ "loss": 2.4256,
+ "step": 318
+ },
+ {
+ "epoch": 0.06255515246592804,
+ "grad_norm": 4.435902118682861,
+ "learning_rate": 0.000176228918303519,
+ "loss": 1.9269,
+ "step": 319
+ },
+ {
+ "epoch": 0.06275125012256104,
+ "grad_norm": 10.938987731933594,
+ "learning_rate": 0.00017608126929472795,
+ "loss": 1.4649,
+ "step": 320
+ },
+ {
+ "epoch": 0.06294734777919404,
+ "grad_norm": 6.332970142364502,
+ "learning_rate": 0.00017593322542081485,
+ "loss": 2.0089,
+ "step": 321
+ },
+ {
+ "epoch": 0.06314344543582705,
+ "grad_norm": 6.731532573699951,
+ "learning_rate": 0.00017578478745013392,
+ "loss": 2.4046,
+ "step": 322
+ },
+ {
+ "epoch": 0.06333954309246005,
+ "grad_norm": 8.772012710571289,
+ "learning_rate": 0.00017563595615308474,
+ "loss": 1.4935,
+ "step": 323
+ },
+ {
+ "epoch": 0.06353564074909304,
+ "grad_norm": 5.693745136260986,
+ "learning_rate": 0.00017548673230210823,
+ "loss": 1.848,
+ "step": 324
+ },
+ {
+ "epoch": 0.06373173840572605,
+ "grad_norm": 15.056157112121582,
+ "learning_rate": 0.0001753371166716828,
+ "loss": 1.4598,
+ "step": 325
+ },
+ {
+ "epoch": 0.06392783606235905,
+ "grad_norm": 9.370506286621094,
+ "learning_rate": 0.00017518711003832002,
+ "loss": 1.4809,
+ "step": 326
+ },
+ {
+ "epoch": 0.06412393371899205,
+ "grad_norm": 19.398839950561523,
+ "learning_rate": 0.000175036713180561,
+ "loss": 1.0093,
+ "step": 327
+ },
+ {
+ "epoch": 0.06432003137562506,
+ "grad_norm": 4.393742084503174,
+ "learning_rate": 0.00017488592687897193,
+ "loss": 0.817,
+ "step": 328
+ },
+ {
+ "epoch": 0.06451612903225806,
+ "grad_norm": 6.7713799476623535,
+ "learning_rate": 0.00017473475191614037,
+ "loss": 2.1701,
+ "step": 329
+ },
+ {
+ "epoch": 0.06471222668889107,
+ "grad_norm": 5.920267581939697,
+ "learning_rate": 0.00017458318907667098,
+ "loss": 3.3491,
+ "step": 330
+ },
+ {
+ "epoch": 0.06490832434552407,
+ "grad_norm": 15.095996856689453,
+ "learning_rate": 0.0001744312391471816,
+ "loss": 1.7637,
+ "step": 331
+ },
+ {
+ "epoch": 0.06510442200215708,
+ "grad_norm": 9.470211029052734,
+ "learning_rate": 0.00017427890291629893,
+ "loss": 2.7744,
+ "step": 332
+ },
+ {
+ "epoch": 0.06530051965879008,
+ "grad_norm": 9.082067489624023,
+ "learning_rate": 0.00017412618117465477,
+ "loss": 3.1791,
+ "step": 333
+ },
+ {
+ "epoch": 0.06549661731542308,
+ "grad_norm": 5.174635410308838,
+ "learning_rate": 0.0001739730747148816,
+ "loss": 1.2189,
+ "step": 334
+ },
+ {
+ "epoch": 0.06569271497205609,
+ "grad_norm": 5.053405284881592,
+ "learning_rate": 0.00017381958433160865,
+ "loss": 1.7119,
+ "step": 335
+ },
+ {
+ "epoch": 0.06588881262868909,
+ "grad_norm": 5.771046161651611,
+ "learning_rate": 0.0001736657108214578,
+ "loss": 1.4188,
+ "step": 336
+ },
+ {
+ "epoch": 0.0660849102853221,
+ "grad_norm": 8.400517463684082,
+ "learning_rate": 0.00017351145498303925,
+ "loss": 2.3167,
+ "step": 337
+ },
+ {
+ "epoch": 0.0662810079419551,
+ "grad_norm": 4.6646728515625,
+ "learning_rate": 0.0001733568176169476,
+ "loss": 1.2102,
+ "step": 338
+ },
+ {
+ "epoch": 0.06647710559858809,
+ "grad_norm": 8.288646697998047,
+ "learning_rate": 0.0001732017995257575,
+ "loss": 2.4803,
+ "step": 339
+ },
+ {
+ "epoch": 0.0666732032552211,
+ "grad_norm": 10.970074653625488,
+ "learning_rate": 0.00017304640151401967,
+ "loss": 2.5839,
+ "step": 340
+ },
+ {
+ "epoch": 0.0668693009118541,
+ "grad_norm": 6.0125732421875,
+ "learning_rate": 0.00017289062438825665,
+ "loss": 1.5807,
+ "step": 341
+ },
+ {
+ "epoch": 0.0670653985684871,
+ "grad_norm": 5.844028472900391,
+ "learning_rate": 0.0001727344689569585,
+ "loss": 3.34,
+ "step": 342
+ },
+ {
+ "epoch": 0.06726149622512011,
+ "grad_norm": 7.1026387214660645,
+ "learning_rate": 0.00017257793603057871,
+ "loss": 1.4347,
+ "step": 343
+ },
+ {
+ "epoch": 0.06745759388175311,
+ "grad_norm": 9.198262214660645,
+ "learning_rate": 0.00017242102642153016,
+ "loss": 1.834,
+ "step": 344
+ },
+ {
+ "epoch": 0.06765369153838612,
+ "grad_norm": 5.76854133605957,
+ "learning_rate": 0.00017226374094418044,
+ "loss": 0.9294,
+ "step": 345
+ },
+ {
+ "epoch": 0.06784978919501912,
+ "grad_norm": 10.319186210632324,
+ "learning_rate": 0.0001721060804148482,
+ "loss": 2.0088,
+ "step": 346
+ },
+ {
+ "epoch": 0.06804588685165212,
+ "grad_norm": 22.298240661621094,
+ "learning_rate": 0.00017194804565179842,
+ "loss": 2.6901,
+ "step": 347
+ },
+ {
+ "epoch": 0.06824198450828513,
+ "grad_norm": 11.38401985168457,
+ "learning_rate": 0.00017178963747523847,
+ "loss": 2.6342,
+ "step": 348
+ },
+ {
+ "epoch": 0.06824198450828513,
+ "eval_loss": 0.4400941729545593,
+ "eval_runtime": 78.7276,
+ "eval_samples_per_second": 27.284,
+ "eval_steps_per_second": 13.642,
+ "step": 348
+ }
+ ],
+ "logging_steps": 1,
+ "max_steps": 1389,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 348,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 7.492671941640192e+16,
+ "train_batch_size": 2,
+ "trial_name": null,
+ "trial_params": null
+ }
last-checkpoint/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a0aa4d67a55c695ed6693a9a4569b45bb7fea420dffdcb0312de61969b074ac
+ size 6776