yalhessi committed
Commit f88d814 · verified · 1 Parent(s): 06c01be

Training in progress, epoch 11, checkpoint

checkpoint-171259/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: deepseek-ai/deepseek-coder-1.3b-base
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.14.0
checkpoint-171259/adapter_config.json ADDED
@@ -0,0 +1,32 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "deepseek-ai/deepseek-coder-1.3b-base",
+ "bias": "none",
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "v_proj",
+ "q_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
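
The adapter_config.json above describes a small LoRA adapter (r=8, lora_alpha=32, dropout 0.05) on the q_proj and v_proj projections of deepseek-ai/deepseek-coder-1.3b-base, saved with PEFT 0.14.0. Below is a minimal, illustrative sketch of loading this checkpoint for inference; it assumes the checkpoint-171259 directory has been downloaded locally and that transformers, peft, and torch are installed. The prompt and generation settings are placeholders, not part of this commit.

```python
# Illustrative sketch (not part of this commit): attach the LoRA adapter in
# checkpoint-171259 to the deepseek-coder-1.3b base model for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# PeftModel reads adapter_config.json and adapter_model.safetensors from the directory.
model = PeftModel.from_pretrained(base_model, "checkpoint-171259")
model.eval()

inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a standalone model is preferred, `model.merge_and_unload()` folds the low-rank updates into the base weights.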
checkpoint-171259/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc2ac92efb0844d834202ea1d613536dd50e616df8eda94f8c89e201bed86f06
+ size 6304096
checkpoint-171259/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a7710709538f8819ff6286a4c95c8df5393304f39024ba417b4fc890617a803
+ size 12663802
checkpoint-171259/rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b0fb0b09741ba3e592b67521d975aab0ec9e8ace8bbb6842114679987e2e98f
+ size 15984
checkpoint-171259/rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:805f5425815b68d2be45c9f4651e28f44f856353395ae4c02c3bb3d3ec9d35ab
+ size 15984
checkpoint-171259/rng_state_2.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa45d707d1bc8544aa53c677c25aced51d6c358df3bb9a739560f4e19bc14dd2
+ size 15984
checkpoint-171259/rng_state_3.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3fb0bf7bd29250a46fd68cb36e3ae56d28e2f71479af8153501e016a36e39c03
+ size 15984
checkpoint-171259/rng_state_4.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8d6066a4ea635d5144ce49c7d718fd74f727a98ab5e8c20cfc2be5c66250843
+ size 15984
checkpoint-171259/rng_state_5.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3387c6e9ba81cbea4d4da547ab0a4ee7a6389963df2de2e87ec09c643ba36b2f
+ size 15984
checkpoint-171259/rng_state_6.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40a0e072174e9ef8252ab0df0051f50ebf6c70c241ee55c19a9fe315248cc73a
+ size 15984
checkpoint-171259/rng_state_7.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4366f50cfd845daab5ef3aca4ceba2636fb3dfa1c611808c050169b990df6320
+ size 15984
checkpoint-171259/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17d36064ed26cea621d34650a6154d3c5da069515d8d7fb74b6a52d98ac066f6
+ size 1064
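
The trainer_state.json added below is the Hugging Face Trainer state for this checkpoint: the epoch/step counters plus a log_history array holding training loss, grad_norm, and learning_rate roughly every 500 steps and an eval_loss entry every 3114 steps. A small sketch, assuming the file is available at the local path shown, of extracting the evaluation-loss curve from it:

```python
# Sketch (path assumed): read the eval-loss curve out of trainer_state.json.
import json

with open("checkpoint-171259/trainer_state.json") as f:
    state = json.load(f)

# Keep only the evaluation entries; training-loss entries lack the "eval_loss" key.
eval_points = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]
for step, loss in eval_points:
    print(f"step {step:>7}  eval_loss {loss:.4f}")
```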
checkpoint-171259/trainer_state.json ADDED
@@ -0,0 +1,2859 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 11.0,
5
+ "eval_steps": 3114,
6
+ "global_step": 171259,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.03211510052026463,
13
+ "grad_norm": 0.7474520206451416,
14
+ "learning_rate": 0.0007978632753120518,
15
+ "loss": 0.5422,
16
+ "step": 500
17
+ },
18
+ {
19
+ "epoch": 0.06423020104052926,
20
+ "grad_norm": 0.89436274766922,
21
+ "learning_rate": 0.0007957265506241035,
22
+ "loss": 0.4574,
23
+ "step": 1000
24
+ },
25
+ {
26
+ "epoch": 0.09634530156079389,
27
+ "grad_norm": 0.8834966421127319,
28
+ "learning_rate": 0.0007935855439227525,
29
+ "loss": 0.4305,
30
+ "step": 1500
31
+ },
32
+ {
33
+ "epoch": 0.12846040208105852,
34
+ "grad_norm": 0.9943472743034363,
35
+ "learning_rate": 0.0007914445372214016,
36
+ "loss": 0.411,
37
+ "step": 2000
38
+ },
39
+ {
40
+ "epoch": 0.16057550260132314,
41
+ "grad_norm": 0.738410472869873,
42
+ "learning_rate": 0.0007893035305200505,
43
+ "loss": 0.3989,
44
+ "step": 2500
45
+ },
46
+ {
47
+ "epoch": 0.19269060312158778,
48
+ "grad_norm": 1.2562445402145386,
49
+ "learning_rate": 0.0007871668058321023,
50
+ "loss": 0.3906,
51
+ "step": 3000
52
+ },
53
+ {
54
+ "epoch": 0.2000128460402081,
55
+ "eval_loss": 0.38617298007011414,
56
+ "eval_runtime": 6.1184,
57
+ "eval_samples_per_second": 81.721,
58
+ "eval_steps_per_second": 5.23,
59
+ "step": 3114
60
+ },
61
+ {
62
+ "epoch": 0.2248057036418524,
63
+ "grad_norm": 0.8588173389434814,
64
+ "learning_rate": 0.0007850257991307513,
65
+ "loss": 0.3847,
66
+ "step": 3500
67
+ },
68
+ {
69
+ "epoch": 0.25692080416211704,
70
+ "grad_norm": 0.8775615096092224,
71
+ "learning_rate": 0.0007828847924294003,
72
+ "loss": 0.3824,
73
+ "step": 4000
74
+ },
75
+ {
76
+ "epoch": 0.28903590468238166,
77
+ "grad_norm": 0.8612831830978394,
78
+ "learning_rate": 0.0007807437857280493,
79
+ "loss": 0.374,
80
+ "step": 4500
81
+ },
82
+ {
83
+ "epoch": 0.3211510052026463,
84
+ "grad_norm": 1.0078139305114746,
85
+ "learning_rate": 0.0007786027790266984,
86
+ "loss": 0.367,
87
+ "step": 5000
88
+ },
89
+ {
90
+ "epoch": 0.3532661057229109,
91
+ "grad_norm": 0.7598047852516174,
92
+ "learning_rate": 0.0007764703363521528,
93
+ "loss": 0.3685,
94
+ "step": 5500
95
+ },
96
+ {
97
+ "epoch": 0.38538120624317557,
98
+ "grad_norm": 0.779103696346283,
99
+ "learning_rate": 0.0007743293296508018,
100
+ "loss": 0.3635,
101
+ "step": 6000
102
+ },
103
+ {
104
+ "epoch": 0.4000256920804162,
105
+ "eval_loss": 0.3629798889160156,
106
+ "eval_runtime": 6.0698,
107
+ "eval_samples_per_second": 82.375,
108
+ "eval_steps_per_second": 5.272,
109
+ "step": 6228
110
+ },
111
+ {
112
+ "epoch": 0.4174963067634402,
113
+ "grad_norm": 1.0634421110153198,
114
+ "learning_rate": 0.0007721883229494509,
115
+ "loss": 0.3585,
116
+ "step": 6500
117
+ },
118
+ {
119
+ "epoch": 0.4496114072837048,
120
+ "grad_norm": 0.7991892099380493,
121
+ "learning_rate": 0.0007700473162480999,
122
+ "loss": 0.3542,
123
+ "step": 7000
124
+ },
125
+ {
126
+ "epoch": 0.4817265078039694,
127
+ "grad_norm": 1.0669060945510864,
128
+ "learning_rate": 0.000767906309546749,
129
+ "loss": 0.3507,
130
+ "step": 7500
131
+ },
132
+ {
133
+ "epoch": 0.5138416083242341,
134
+ "grad_norm": 0.764560878276825,
135
+ "learning_rate": 0.000765765302845398,
136
+ "loss": 0.3502,
137
+ "step": 8000
138
+ },
139
+ {
140
+ "epoch": 0.5459567088444987,
141
+ "grad_norm": 0.882352888584137,
142
+ "learning_rate": 0.000763624296144047,
143
+ "loss": 0.3538,
144
+ "step": 8500
145
+ },
146
+ {
147
+ "epoch": 0.5780718093647633,
148
+ "grad_norm": 1.4643352031707764,
149
+ "learning_rate": 0.000761483289442696,
150
+ "loss": 0.3503,
151
+ "step": 9000
152
+ },
153
+ {
154
+ "epoch": 0.6000385381206244,
155
+ "eval_loss": 0.34752723574638367,
156
+ "eval_runtime": 6.2876,
157
+ "eval_samples_per_second": 79.522,
158
+ "eval_steps_per_second": 5.089,
159
+ "step": 9342
160
+ },
161
+ {
162
+ "epoch": 0.610186909885028,
163
+ "grad_norm": 0.9141144752502441,
164
+ "learning_rate": 0.0007593465647547478,
165
+ "loss": 0.3461,
166
+ "step": 9500
167
+ },
168
+ {
169
+ "epoch": 0.6423020104052926,
170
+ "grad_norm": 0.7810879349708557,
171
+ "learning_rate": 0.0007572055580533967,
172
+ "loss": 0.345,
173
+ "step": 10000
174
+ },
175
+ {
176
+ "epoch": 0.6744171109255572,
177
+ "grad_norm": 0.6038429141044617,
178
+ "learning_rate": 0.0007550645513520458,
179
+ "loss": 0.348,
180
+ "step": 10500
181
+ },
182
+ {
183
+ "epoch": 0.7065322114458218,
184
+ "grad_norm": 0.8717966079711914,
185
+ "learning_rate": 0.0007529235446506948,
186
+ "loss": 0.3387,
187
+ "step": 11000
188
+ },
189
+ {
190
+ "epoch": 0.7386473119660865,
191
+ "grad_norm": 0.8474560976028442,
192
+ "learning_rate": 0.0007507825379493438,
193
+ "loss": 0.3493,
194
+ "step": 11500
195
+ },
196
+ {
197
+ "epoch": 0.7707624124863511,
198
+ "grad_norm": 1.002699375152588,
199
+ "learning_rate": 0.0007486415312479928,
200
+ "loss": 0.3403,
201
+ "step": 12000
202
+ },
203
+ {
204
+ "epoch": 0.8000513841608324,
205
+ "eval_loss": 0.3313402533531189,
206
+ "eval_runtime": 6.1854,
207
+ "eval_samples_per_second": 80.835,
208
+ "eval_steps_per_second": 5.173,
209
+ "step": 12456
210
+ },
211
+ {
212
+ "epoch": 0.8028775130066157,
213
+ "grad_norm": 0.9874686002731323,
214
+ "learning_rate": 0.0007465005245466419,
215
+ "loss": 0.3374,
216
+ "step": 12500
217
+ },
218
+ {
219
+ "epoch": 0.8349926135268804,
220
+ "grad_norm": 0.8380157351493835,
221
+ "learning_rate": 0.000744359517845291,
222
+ "loss": 0.3321,
223
+ "step": 13000
224
+ },
225
+ {
226
+ "epoch": 0.8671077140471449,
227
+ "grad_norm": 0.9501023888587952,
228
+ "learning_rate": 0.0007422270751707453,
229
+ "loss": 0.3384,
230
+ "step": 13500
231
+ },
232
+ {
233
+ "epoch": 0.8992228145674096,
234
+ "grad_norm": 0.8805665373802185,
235
+ "learning_rate": 0.0007400860684693944,
236
+ "loss": 0.3361,
237
+ "step": 14000
238
+ },
239
+ {
240
+ "epoch": 0.9313379150876743,
241
+ "grad_norm": 0.9132845401763916,
242
+ "learning_rate": 0.0007379450617680433,
243
+ "loss": 0.3346,
244
+ "step": 14500
245
+ },
246
+ {
247
+ "epoch": 0.9634530156079388,
248
+ "grad_norm": 1.006101369857788,
249
+ "learning_rate": 0.0007358040550666924,
250
+ "loss": 0.3348,
251
+ "step": 15000
252
+ },
253
+ {
254
+ "epoch": 0.9955681161282035,
255
+ "grad_norm": 0.7373970746994019,
256
+ "learning_rate": 0.0007336630483653414,
257
+ "loss": 0.3331,
258
+ "step": 15500
259
+ },
260
+ {
261
+ "epoch": 1.0000642302010405,
262
+ "eval_loss": 0.33074307441711426,
263
+ "eval_runtime": 6.1886,
264
+ "eval_samples_per_second": 80.793,
265
+ "eval_steps_per_second": 5.171,
266
+ "step": 15570
267
+ },
268
+ {
269
+ "epoch": 1.0276832166484682,
270
+ "grad_norm": 1.116804599761963,
271
+ "learning_rate": 0.0007315263236773932,
272
+ "loss": 0.3198,
273
+ "step": 16000
274
+ },
275
+ {
276
+ "epoch": 1.0597983171687329,
277
+ "grad_norm": 0.8930056095123291,
278
+ "learning_rate": 0.0007293853169760421,
279
+ "loss": 0.3236,
280
+ "step": 16500
281
+ },
282
+ {
283
+ "epoch": 1.0919134176889973,
284
+ "grad_norm": 1.135382056236267,
285
+ "learning_rate": 0.0007272443102746912,
286
+ "loss": 0.3195,
287
+ "step": 17000
288
+ },
289
+ {
290
+ "epoch": 1.124028518209262,
291
+ "grad_norm": 1.1518044471740723,
292
+ "learning_rate": 0.0007251033035733402,
293
+ "loss": 0.3212,
294
+ "step": 17500
295
+ },
296
+ {
297
+ "epoch": 1.1561436187295266,
298
+ "grad_norm": 0.8707193732261658,
299
+ "learning_rate": 0.0007229622968719892,
300
+ "loss": 0.3189,
301
+ "step": 18000
302
+ },
303
+ {
304
+ "epoch": 1.1882587192497913,
305
+ "grad_norm": 1.0189062356948853,
306
+ "learning_rate": 0.0007208212901706382,
307
+ "loss": 0.3239,
308
+ "step": 18500
309
+ },
310
+ {
311
+ "epoch": 1.2000770762412487,
312
+ "eval_loss": 0.31824222207069397,
313
+ "eval_runtime": 6.2671,
314
+ "eval_samples_per_second": 79.782,
315
+ "eval_steps_per_second": 5.106,
316
+ "step": 18684
317
+ },
318
+ {
319
+ "epoch": 1.2203738197700558,
320
+ "grad_norm": 0.9791691303253174,
321
+ "learning_rate": 0.00071868456548269,
322
+ "loss": 0.3208,
323
+ "step": 19000
324
+ },
325
+ {
326
+ "epoch": 1.2524889202903204,
327
+ "grad_norm": 0.6720991134643555,
328
+ "learning_rate": 0.000716543558781339,
329
+ "loss": 0.3196,
330
+ "step": 19500
331
+ },
332
+ {
333
+ "epoch": 1.2846040208105851,
334
+ "grad_norm": 0.8015382289886475,
335
+ "learning_rate": 0.000714402552079988,
336
+ "loss": 0.322,
337
+ "step": 20000
338
+ },
339
+ {
340
+ "epoch": 1.3167191213308498,
341
+ "grad_norm": 0.9411060214042664,
342
+ "learning_rate": 0.0007122615453786371,
343
+ "loss": 0.3178,
344
+ "step": 20500
345
+ },
346
+ {
347
+ "epoch": 1.3488342218511145,
348
+ "grad_norm": 1.2184011936187744,
349
+ "learning_rate": 0.000710120538677286,
350
+ "loss": 0.3157,
351
+ "step": 21000
352
+ },
353
+ {
354
+ "epoch": 1.3809493223713791,
355
+ "grad_norm": 0.9301189184188843,
356
+ "learning_rate": 0.0007079795319759352,
357
+ "loss": 0.3155,
358
+ "step": 21500
359
+ },
360
+ {
361
+ "epoch": 1.4000899222814567,
362
+ "eval_loss": 0.30902907252311707,
363
+ "eval_runtime": 6.1349,
364
+ "eval_samples_per_second": 81.501,
365
+ "eval_steps_per_second": 5.216,
366
+ "step": 21798
367
+ },
368
+ {
369
+ "epoch": 1.4130644228916436,
370
+ "grad_norm": 0.8129053115844727,
371
+ "learning_rate": 0.0007058385252745842,
372
+ "loss": 0.3187,
373
+ "step": 22000
374
+ },
375
+ {
376
+ "epoch": 1.4451795234119083,
377
+ "grad_norm": 0.9045296311378479,
378
+ "learning_rate": 0.0007037018005866359,
379
+ "loss": 0.3184,
380
+ "step": 22500
381
+ },
382
+ {
383
+ "epoch": 1.477294623932173,
384
+ "grad_norm": 1.3381433486938477,
385
+ "learning_rate": 0.0007015607938852849,
386
+ "loss": 0.3161,
387
+ "step": 23000
388
+ },
389
+ {
390
+ "epoch": 1.5094097244524374,
391
+ "grad_norm": 1.2223104238510132,
392
+ "learning_rate": 0.0006994240691973367,
393
+ "loss": 0.3105,
394
+ "step": 23500
395
+ },
396
+ {
397
+ "epoch": 1.541524824972702,
398
+ "grad_norm": 1.6614145040512085,
399
+ "learning_rate": 0.0006972830624959856,
400
+ "loss": 0.312,
401
+ "step": 24000
402
+ },
403
+ {
404
+ "epoch": 1.5736399254929667,
405
+ "grad_norm": 1.0367958545684814,
406
+ "learning_rate": 0.0006951420557946347,
407
+ "loss": 0.3143,
408
+ "step": 24500
409
+ },
410
+ {
411
+ "epoch": 1.6001027683216649,
412
+ "eval_loss": 0.3097926378250122,
413
+ "eval_runtime": 6.0915,
414
+ "eval_samples_per_second": 82.082,
415
+ "eval_steps_per_second": 5.253,
416
+ "step": 24912
417
+ },
418
+ {
419
+ "epoch": 1.6057550260132314,
420
+ "grad_norm": 0.9055228233337402,
421
+ "learning_rate": 0.0006930010490932837,
422
+ "loss": 0.3142,
423
+ "step": 25000
424
+ },
425
+ {
426
+ "epoch": 1.637870126533496,
427
+ "grad_norm": 1.0741256475448608,
428
+ "learning_rate": 0.0006908600423919327,
429
+ "loss": 0.3172,
430
+ "step": 25500
431
+ },
432
+ {
433
+ "epoch": 1.6699852270537607,
434
+ "grad_norm": 0.8932151198387146,
435
+ "learning_rate": 0.0006887233177039845,
436
+ "loss": 0.3117,
437
+ "step": 26000
438
+ },
439
+ {
440
+ "epoch": 1.7021003275740254,
441
+ "grad_norm": 1.035973310470581,
442
+ "learning_rate": 0.0006865823110026335,
443
+ "loss": 0.313,
444
+ "step": 26500
445
+ },
446
+ {
447
+ "epoch": 1.73421542809429,
448
+ "grad_norm": 0.9380423426628113,
449
+ "learning_rate": 0.0006844455863146852,
450
+ "loss": 0.3083,
451
+ "step": 27000
452
+ },
453
+ {
454
+ "epoch": 1.7663305286145545,
455
+ "grad_norm": 0.8082458972930908,
456
+ "learning_rate": 0.0006823045796133342,
457
+ "loss": 0.3054,
458
+ "step": 27500
459
+ },
460
+ {
461
+ "epoch": 1.7984456291348192,
462
+ "grad_norm": 0.646691620349884,
463
+ "learning_rate": 0.0006801635729119833,
464
+ "loss": 0.3114,
465
+ "step": 28000
466
+ },
467
+ {
468
+ "epoch": 1.8001156143618728,
469
+ "eval_loss": 0.3078465163707733,
470
+ "eval_runtime": 6.2786,
471
+ "eval_samples_per_second": 79.635,
472
+ "eval_steps_per_second": 5.097,
473
+ "step": 28026
474
+ },
475
+ {
476
+ "epoch": 1.8305607296550839,
477
+ "grad_norm": 0.8007400035858154,
478
+ "learning_rate": 0.0006780225662106322,
479
+ "loss": 0.3151,
480
+ "step": 28500
481
+ },
482
+ {
483
+ "epoch": 1.8626758301753483,
484
+ "grad_norm": 0.8854690194129944,
485
+ "learning_rate": 0.0006758815595092813,
486
+ "loss": 0.3094,
487
+ "step": 29000
488
+ },
489
+ {
490
+ "epoch": 1.894790930695613,
491
+ "grad_norm": 0.9263831377029419,
492
+ "learning_rate": 0.0006737405528079303,
493
+ "loss": 0.3063,
494
+ "step": 29500
495
+ },
496
+ {
497
+ "epoch": 1.9269060312158777,
498
+ "grad_norm": 0.946422815322876,
499
+ "learning_rate": 0.0006715995461065794,
500
+ "loss": 0.3061,
501
+ "step": 30000
502
+ },
503
+ {
504
+ "epoch": 1.9590211317361423,
505
+ "grad_norm": 0.9862657785415649,
506
+ "learning_rate": 0.0006694585394052283,
507
+ "loss": 0.306,
508
+ "step": 30500
509
+ },
510
+ {
511
+ "epoch": 1.991136232256407,
512
+ "grad_norm": 1.5128605365753174,
513
+ "learning_rate": 0.0006673175327038774,
514
+ "loss": 0.305,
515
+ "step": 31000
516
+ },
517
+ {
518
+ "epoch": 2.000128460402081,
519
+ "eval_loss": 0.30374282598495483,
520
+ "eval_runtime": 6.3123,
521
+ "eval_samples_per_second": 79.21,
522
+ "eval_steps_per_second": 5.069,
523
+ "step": 31140
524
+ },
525
+ {
526
+ "epoch": 2.0232513327766717,
527
+ "grad_norm": 0.9001232385635376,
528
+ "learning_rate": 0.0006651765260025264,
529
+ "loss": 0.2992,
530
+ "step": 31500
531
+ },
532
+ {
533
+ "epoch": 2.0553664332969364,
534
+ "grad_norm": 0.6829231381416321,
535
+ "learning_rate": 0.0006630355193011754,
536
+ "loss": 0.2984,
537
+ "step": 32000
538
+ },
539
+ {
540
+ "epoch": 2.087481533817201,
541
+ "grad_norm": 1.010355830192566,
542
+ "learning_rate": 0.0006608945125998244,
543
+ "loss": 0.293,
544
+ "step": 32500
545
+ },
546
+ {
547
+ "epoch": 2.1195966343374657,
548
+ "grad_norm": 0.8349985480308533,
549
+ "learning_rate": 0.0006587620699252789,
550
+ "loss": 0.2984,
551
+ "step": 33000
552
+ },
553
+ {
554
+ "epoch": 2.15171173485773,
555
+ "grad_norm": 1.2974556684494019,
556
+ "learning_rate": 0.0006566210632239279,
557
+ "loss": 0.2977,
558
+ "step": 33500
559
+ },
560
+ {
561
+ "epoch": 2.1838268353779946,
562
+ "grad_norm": 1.0526032447814941,
563
+ "learning_rate": 0.0006544800565225769,
564
+ "loss": 0.2934,
565
+ "step": 34000
566
+ },
567
+ {
568
+ "epoch": 2.2001413064422892,
569
+ "eval_loss": 0.2976307272911072,
570
+ "eval_runtime": 6.1928,
571
+ "eval_samples_per_second": 80.739,
572
+ "eval_steps_per_second": 5.167,
573
+ "step": 34254
574
+ },
575
+ {
576
+ "epoch": 2.2159419358982593,
577
+ "grad_norm": 1.0207550525665283,
578
+ "learning_rate": 0.000652339049821226,
579
+ "loss": 0.2999,
580
+ "step": 34500
581
+ },
582
+ {
583
+ "epoch": 2.248057036418524,
584
+ "grad_norm": 0.7849873900413513,
585
+ "learning_rate": 0.0006501980431198749,
586
+ "loss": 0.3017,
587
+ "step": 35000
588
+ },
589
+ {
590
+ "epoch": 2.2801721369387886,
591
+ "grad_norm": 0.807049572467804,
592
+ "learning_rate": 0.000648057036418524,
593
+ "loss": 0.2931,
594
+ "step": 35500
595
+ },
596
+ {
597
+ "epoch": 2.3122872374590533,
598
+ "grad_norm": 0.9243353605270386,
599
+ "learning_rate": 0.0006459160297171731,
600
+ "loss": 0.2956,
601
+ "step": 36000
602
+ },
603
+ {
604
+ "epoch": 2.344402337979318,
605
+ "grad_norm": 1.2570290565490723,
606
+ "learning_rate": 0.0006437750230158221,
607
+ "loss": 0.2958,
608
+ "step": 36500
609
+ },
610
+ {
611
+ "epoch": 2.3765174384995826,
612
+ "grad_norm": 0.8402499556541443,
613
+ "learning_rate": 0.0006416340163144711,
614
+ "loss": 0.2983,
615
+ "step": 37000
616
+ },
617
+ {
618
+ "epoch": 2.4001541524824974,
619
+ "eval_loss": 0.29359912872314453,
620
+ "eval_runtime": 6.2635,
621
+ "eval_samples_per_second": 79.828,
622
+ "eval_steps_per_second": 5.109,
623
+ "step": 37368
624
+ },
625
+ {
626
+ "epoch": 2.4086325390198473,
627
+ "grad_norm": 0.8177831768989563,
628
+ "learning_rate": 0.0006394930096131202,
629
+ "loss": 0.2949,
630
+ "step": 37500
631
+ },
632
+ {
633
+ "epoch": 2.4407476395401115,
634
+ "grad_norm": 1.740903377532959,
635
+ "learning_rate": 0.0006373562849251718,
636
+ "loss": 0.2923,
637
+ "step": 38000
638
+ },
639
+ {
640
+ "epoch": 2.472862740060376,
641
+ "grad_norm": 0.799891471862793,
642
+ "learning_rate": 0.0006352195602372236,
643
+ "loss": 0.291,
644
+ "step": 38500
645
+ },
646
+ {
647
+ "epoch": 2.504977840580641,
648
+ "grad_norm": 0.7158030867576599,
649
+ "learning_rate": 0.0006330828355492754,
650
+ "loss": 0.2913,
651
+ "step": 39000
652
+ },
653
+ {
654
+ "epoch": 2.5370929411009056,
655
+ "grad_norm": 1.3045659065246582,
656
+ "learning_rate": 0.0006309418288479243,
657
+ "loss": 0.2888,
658
+ "step": 39500
659
+ },
660
+ {
661
+ "epoch": 2.5692080416211702,
662
+ "grad_norm": 0.874169647693634,
663
+ "learning_rate": 0.0006288008221465734,
664
+ "loss": 0.295,
665
+ "step": 40000
666
+ },
667
+ {
668
+ "epoch": 2.600166998522705,
669
+ "eval_loss": 0.29626408219337463,
670
+ "eval_runtime": 6.2399,
671
+ "eval_samples_per_second": 80.129,
672
+ "eval_steps_per_second": 5.128,
673
+ "step": 40482
674
+ },
675
+ {
676
+ "epoch": 2.601323142141435,
677
+ "grad_norm": 0.9616047739982605,
678
+ "learning_rate": 0.0006266598154452224,
679
+ "loss": 0.2929,
680
+ "step": 40500
681
+ },
682
+ {
683
+ "epoch": 2.6334382426616996,
684
+ "grad_norm": 0.8935590386390686,
685
+ "learning_rate": 0.0006245188087438714,
686
+ "loss": 0.292,
687
+ "step": 41000
688
+ },
689
+ {
690
+ "epoch": 2.6655533431819642,
691
+ "grad_norm": 0.7435338497161865,
692
+ "learning_rate": 0.0006223778020425204,
693
+ "loss": 0.2904,
694
+ "step": 41500
695
+ },
696
+ {
697
+ "epoch": 2.697668443702229,
698
+ "grad_norm": 0.8259156346321106,
699
+ "learning_rate": 0.0006202367953411695,
700
+ "loss": 0.2868,
701
+ "step": 42000
702
+ },
703
+ {
704
+ "epoch": 2.729783544222493,
705
+ "grad_norm": 0.848035454750061,
706
+ "learning_rate": 0.0006180957886398185,
707
+ "loss": 0.2876,
708
+ "step": 42500
709
+ },
710
+ {
711
+ "epoch": 2.7618986447427583,
712
+ "grad_norm": 0.9523009657859802,
713
+ "learning_rate": 0.0006159547819384675,
714
+ "loss": 0.2891,
715
+ "step": 43000
716
+ },
717
+ {
718
+ "epoch": 2.7940137452630225,
719
+ "grad_norm": 0.7964786887168884,
720
+ "learning_rate": 0.0006138137752371165,
721
+ "loss": 0.2932,
722
+ "step": 43500
723
+ },
724
+ {
725
+ "epoch": 2.8001798445629134,
726
+ "eval_loss": 0.2853344976902008,
727
+ "eval_runtime": 6.2618,
728
+ "eval_samples_per_second": 79.849,
729
+ "eval_steps_per_second": 5.11,
730
+ "step": 43596
731
+ },
732
+ {
733
+ "epoch": 2.826128845783287,
734
+ "grad_norm": 1.2521328926086426,
735
+ "learning_rate": 0.0006116727685357656,
736
+ "loss": 0.2884,
737
+ "step": 44000
738
+ },
739
+ {
740
+ "epoch": 2.858243946303552,
741
+ "grad_norm": 1.0122313499450684,
742
+ "learning_rate": 0.0006095317618344145,
743
+ "loss": 0.2896,
744
+ "step": 44500
745
+ },
746
+ {
747
+ "epoch": 2.8903590468238165,
748
+ "grad_norm": 0.9321467280387878,
749
+ "learning_rate": 0.0006073907551330636,
750
+ "loss": 0.2893,
751
+ "step": 45000
752
+ },
753
+ {
754
+ "epoch": 2.922474147344081,
755
+ "grad_norm": 1.090589165687561,
756
+ "learning_rate": 0.0006052497484317126,
757
+ "loss": 0.2916,
758
+ "step": 45500
759
+ },
760
+ {
761
+ "epoch": 2.954589247864346,
762
+ "grad_norm": 1.4329869747161865,
763
+ "learning_rate": 0.0006031087417303616,
764
+ "loss": 0.2871,
765
+ "step": 46000
766
+ },
767
+ {
768
+ "epoch": 2.9867043483846105,
769
+ "grad_norm": 0.9811238646507263,
770
+ "learning_rate": 0.0006009677350290106,
771
+ "loss": 0.2866,
772
+ "step": 46500
773
+ },
774
+ {
775
+ "epoch": 3.0001926906031215,
776
+ "eval_loss": 0.27923065423965454,
777
+ "eval_runtime": 6.1503,
778
+ "eval_samples_per_second": 81.297,
779
+ "eval_steps_per_second": 5.203,
780
+ "step": 46710
781
+ },
782
+ {
783
+ "epoch": 3.018819448904875,
784
+ "grad_norm": 1.1142631769180298,
785
+ "learning_rate": 0.0005988267283276597,
786
+ "loss": 0.2795,
787
+ "step": 47000
788
+ },
789
+ {
790
+ "epoch": 3.05093454942514,
791
+ "grad_norm": 1.1772897243499756,
792
+ "learning_rate": 0.000596694285653114,
793
+ "loss": 0.2744,
794
+ "step": 47500
795
+ },
796
+ {
797
+ "epoch": 3.0830496499454045,
798
+ "grad_norm": 0.8586742281913757,
799
+ "learning_rate": 0.0005945532789517631,
800
+ "loss": 0.2755,
801
+ "step": 48000
802
+ },
803
+ {
804
+ "epoch": 3.1151647504656688,
805
+ "grad_norm": 0.8394906520843506,
806
+ "learning_rate": 0.0005924122722504122,
807
+ "loss": 0.2723,
808
+ "step": 48500
809
+ },
810
+ {
811
+ "epoch": 3.1472798509859334,
812
+ "grad_norm": 0.7916896343231201,
813
+ "learning_rate": 0.0005902712655490611,
814
+ "loss": 0.276,
815
+ "step": 49000
816
+ },
817
+ {
818
+ "epoch": 3.179394951506198,
819
+ "grad_norm": 1.0174874067306519,
820
+ "learning_rate": 0.0005881302588477102,
821
+ "loss": 0.2784,
822
+ "step": 49500
823
+ },
824
+ {
825
+ "epoch": 3.2002055366433297,
826
+ "eval_loss": 0.2808820605278015,
827
+ "eval_runtime": 6.3599,
828
+ "eval_samples_per_second": 78.618,
829
+ "eval_steps_per_second": 5.032,
830
+ "step": 49824
831
+ },
832
+ {
833
+ "epoch": 3.211510052026463,
834
+ "grad_norm": 1.0870943069458008,
835
+ "learning_rate": 0.0005859892521463592,
836
+ "loss": 0.2763,
837
+ "step": 50000
838
+ },
839
+ {
840
+ "epoch": 3.2436251525467275,
841
+ "grad_norm": 0.8911240100860596,
842
+ "learning_rate": 0.0005838482454450083,
843
+ "loss": 0.2752,
844
+ "step": 50500
845
+ },
846
+ {
847
+ "epoch": 3.275740253066992,
848
+ "grad_norm": 0.9789806604385376,
849
+ "learning_rate": 0.0005817072387436573,
850
+ "loss": 0.277,
851
+ "step": 51000
852
+ },
853
+ {
854
+ "epoch": 3.307855353587257,
855
+ "grad_norm": 0.8887168765068054,
856
+ "learning_rate": 0.0005795662320423064,
857
+ "loss": 0.2791,
858
+ "step": 51500
859
+ },
860
+ {
861
+ "epoch": 3.3399704541075215,
862
+ "grad_norm": 1.4221384525299072,
863
+ "learning_rate": 0.0005774337893677607,
864
+ "loss": 0.2723,
865
+ "step": 52000
866
+ },
867
+ {
868
+ "epoch": 3.372085554627786,
869
+ "grad_norm": 0.8437333703041077,
870
+ "learning_rate": 0.0005752927826664098,
871
+ "loss": 0.276,
872
+ "step": 52500
873
+ },
874
+ {
875
+ "epoch": 3.400218382683538,
876
+ "eval_loss": 0.27559131383895874,
877
+ "eval_runtime": 6.1838,
878
+ "eval_samples_per_second": 80.856,
879
+ "eval_steps_per_second": 5.175,
880
+ "step": 52938
881
+ },
882
+ {
883
+ "epoch": 3.404200655148051,
884
+ "grad_norm": 1.0225434303283691,
885
+ "learning_rate": 0.0005731517759650589,
886
+ "loss": 0.2833,
887
+ "step": 53000
888
+ },
889
+ {
890
+ "epoch": 3.436315755668315,
891
+ "grad_norm": 1.0217552185058594,
892
+ "learning_rate": 0.0005710107692637078,
893
+ "loss": 0.2773,
894
+ "step": 53500
895
+ },
896
+ {
897
+ "epoch": 3.4684308561885797,
898
+ "grad_norm": 1.426587462425232,
899
+ "learning_rate": 0.0005688697625623569,
900
+ "loss": 0.2804,
901
+ "step": 54000
902
+ },
903
+ {
904
+ "epoch": 3.5005459567088444,
905
+ "grad_norm": 1.0069812536239624,
906
+ "learning_rate": 0.0005667287558610059,
907
+ "loss": 0.2778,
908
+ "step": 54500
909
+ },
910
+ {
911
+ "epoch": 3.532661057229109,
912
+ "grad_norm": 1.2639676332473755,
913
+ "learning_rate": 0.000564587749159655,
914
+ "loss": 0.2762,
915
+ "step": 55000
916
+ },
917
+ {
918
+ "epoch": 3.5647761577493737,
919
+ "grad_norm": 0.8566103577613831,
920
+ "learning_rate": 0.0005624510244717066,
921
+ "loss": 0.2781,
922
+ "step": 55500
923
+ },
924
+ {
925
+ "epoch": 3.5968912582696384,
926
+ "grad_norm": 0.9815769791603088,
927
+ "learning_rate": 0.0005603100177703557,
928
+ "loss": 0.2727,
929
+ "step": 56000
930
+ },
931
+ {
932
+ "epoch": 3.6002312287237457,
933
+ "eval_loss": 0.2685372233390808,
934
+ "eval_runtime": 6.1368,
935
+ "eval_samples_per_second": 81.476,
936
+ "eval_steps_per_second": 5.214,
937
+ "step": 56052
938
+ },
939
+ {
940
+ "epoch": 3.629006358789903,
941
+ "grad_norm": 0.9768785238265991,
942
+ "learning_rate": 0.0005581690110690047,
943
+ "loss": 0.2751,
944
+ "step": 56500
945
+ },
946
+ {
947
+ "epoch": 3.6611214593101677,
948
+ "grad_norm": 0.8992444276809692,
949
+ "learning_rate": 0.0005560280043676537,
950
+ "loss": 0.2725,
951
+ "step": 57000
952
+ },
953
+ {
954
+ "epoch": 3.6932365598304324,
955
+ "grad_norm": 0.8468559980392456,
956
+ "learning_rate": 0.0005538869976663027,
957
+ "loss": 0.2718,
958
+ "step": 57500
959
+ },
960
+ {
961
+ "epoch": 3.7253516603506966,
962
+ "grad_norm": 1.1695789098739624,
963
+ "learning_rate": 0.0005517459909649518,
964
+ "loss": 0.2754,
965
+ "step": 58000
966
+ },
967
+ {
968
+ "epoch": 3.7574667608709618,
969
+ "grad_norm": 1.326716661453247,
970
+ "learning_rate": 0.0005496092662770034,
971
+ "loss": 0.2691,
972
+ "step": 58500
973
+ },
974
+ {
975
+ "epoch": 3.789581861391226,
976
+ "grad_norm": 1.0453835725784302,
977
+ "learning_rate": 0.0005474682595756525,
978
+ "loss": 0.2704,
979
+ "step": 59000
980
+ },
981
+ {
982
+ "epoch": 3.800244074763954,
983
+ "eval_loss": 0.2709694802761078,
984
+ "eval_runtime": 6.0969,
985
+ "eval_samples_per_second": 82.009,
986
+ "eval_steps_per_second": 5.249,
987
+ "step": 59166
988
+ },
989
+ {
990
+ "epoch": 3.8216969619114907,
991
+ "grad_norm": 0.901626706123352,
992
+ "learning_rate": 0.0005453272528743015,
993
+ "loss": 0.2757,
994
+ "step": 59500
995
+ },
996
+ {
997
+ "epoch": 3.8538120624317553,
998
+ "grad_norm": 1.053933024406433,
999
+ "learning_rate": 0.0005431862461729505,
1000
+ "loss": 0.268,
1001
+ "step": 60000
1002
+ },
1003
+ {
1004
+ "epoch": 3.88592716295202,
1005
+ "grad_norm": 0.8032711148262024,
1006
+ "learning_rate": 0.0005410452394715995,
1007
+ "loss": 0.2717,
1008
+ "step": 60500
1009
+ },
1010
+ {
1011
+ "epoch": 3.9180422634722847,
1012
+ "grad_norm": 0.768291175365448,
1013
+ "learning_rate": 0.0005389042327702486,
1014
+ "loss": 0.2718,
1015
+ "step": 61000
1016
+ },
1017
+ {
1018
+ "epoch": 3.9501573639925494,
1019
+ "grad_norm": 0.800992488861084,
1020
+ "learning_rate": 0.0005367632260688976,
1021
+ "loss": 0.2734,
1022
+ "step": 61500
1023
+ },
1024
+ {
1025
+ "epoch": 3.982272464512814,
1026
+ "grad_norm": 0.8997211456298828,
1027
+ "learning_rate": 0.0005346222193675466,
1028
+ "loss": 0.2711,
1029
+ "step": 62000
1030
+ },
1031
+ {
1032
+ "epoch": 4.000256920804162,
1033
+ "eval_loss": 0.2678879499435425,
1034
+ "eval_runtime": 6.1221,
1035
+ "eval_samples_per_second": 81.671,
1036
+ "eval_steps_per_second": 5.227,
1037
+ "step": 62280
1038
+ },
1039
+ {
1040
+ "epoch": 4.014387565033078,
1041
+ "grad_norm": 1.0873627662658691,
1042
+ "learning_rate": 0.0005324854946795984,
1043
+ "loss": 0.264,
1044
+ "step": 62500
1045
+ },
1046
+ {
1047
+ "epoch": 4.046502665553343,
1048
+ "grad_norm": 0.883162796497345,
1049
+ "learning_rate": 0.0005303444879782474,
1050
+ "loss": 0.2574,
1051
+ "step": 63000
1052
+ },
1053
+ {
1054
+ "epoch": 4.078617766073608,
1055
+ "grad_norm": 1.1101760864257812,
1056
+ "learning_rate": 0.0005282077632902991,
1057
+ "loss": 0.2584,
1058
+ "step": 63500
1059
+ },
1060
+ {
1061
+ "epoch": 4.110732866593873,
1062
+ "grad_norm": 0.9394090175628662,
1063
+ "learning_rate": 0.0005260667565889481,
1064
+ "loss": 0.2679,
1065
+ "step": 64000
1066
+ },
1067
+ {
1068
+ "epoch": 4.142847967114137,
1069
+ "grad_norm": 1.0847160816192627,
1070
+ "learning_rate": 0.0005239257498875972,
1071
+ "loss": 0.2625,
1072
+ "step": 64500
1073
+ },
1074
+ {
1075
+ "epoch": 4.174963067634402,
1076
+ "grad_norm": 1.3673467636108398,
1077
+ "learning_rate": 0.0005217847431862462,
1078
+ "loss": 0.2601,
1079
+ "step": 65000
1080
+ },
1081
+ {
1082
+ "epoch": 4.20026976684437,
1083
+ "eval_loss": 0.2655960023403168,
1084
+ "eval_runtime": 6.1994,
1085
+ "eval_samples_per_second": 80.653,
1086
+ "eval_steps_per_second": 5.162,
1087
+ "step": 65394
1088
+ },
1089
+ {
1090
+ "epoch": 4.207078168154666,
1091
+ "grad_norm": 0.7866288423538208,
1092
+ "learning_rate": 0.0005196437364848953,
1093
+ "loss": 0.261,
1094
+ "step": 65500
1095
+ },
1096
+ {
1097
+ "epoch": 4.239193268674931,
1098
+ "grad_norm": 0.9261956214904785,
1099
+ "learning_rate": 0.0005175027297835442,
1100
+ "loss": 0.2598,
1101
+ "step": 66000
1102
+ },
1103
+ {
1104
+ "epoch": 4.271308369195196,
1105
+ "grad_norm": 0.9405527710914612,
1106
+ "learning_rate": 0.0005153617230821933,
1107
+ "loss": 0.2634,
1108
+ "step": 66500
1109
+ },
1110
+ {
1111
+ "epoch": 4.30342346971546,
1112
+ "grad_norm": 0.8403156995773315,
1113
+ "learning_rate": 0.0005132207163808423,
1114
+ "loss": 0.2635,
1115
+ "step": 67000
1116
+ },
1117
+ {
1118
+ "epoch": 4.335538570235725,
1119
+ "grad_norm": 1.2083462476730347,
1120
+ "learning_rate": 0.000511083991692894,
1121
+ "loss": 0.2565,
1122
+ "step": 67500
1123
+ },
1124
+ {
1125
+ "epoch": 4.367653670755989,
1126
+ "grad_norm": 0.9391843676567078,
1127
+ "learning_rate": 0.000508942984991543,
1128
+ "loss": 0.2668,
1129
+ "step": 68000
1130
+ },
1131
+ {
1132
+ "epoch": 4.399768771276254,
1133
+ "grad_norm": 1.0808571577072144,
1134
+ "learning_rate": 0.0005068019782901921,
1135
+ "loss": 0.2574,
1136
+ "step": 68500
1137
+ },
1138
+ {
1139
+ "epoch": 4.4002826128845784,
1140
+ "eval_loss": 0.26204103231430054,
1141
+ "eval_runtime": 6.1364,
1142
+ "eval_samples_per_second": 81.481,
1143
+ "eval_steps_per_second": 5.215,
1144
+ "step": 68508
1145
+ },
1146
+ {
1147
+ "epoch": 4.4318838717965185,
1148
+ "grad_norm": 1.1670928001403809,
1149
+ "learning_rate": 0.0005046609715888412,
1150
+ "loss": 0.2589,
1151
+ "step": 69000
1152
+ },
1153
+ {
1154
+ "epoch": 4.463998972316784,
1155
+ "grad_norm": 0.8508691787719727,
1156
+ "learning_rate": 0.0005025285289142955,
1157
+ "loss": 0.2605,
1158
+ "step": 69500
1159
+ },
1160
+ {
1161
+ "epoch": 4.496114072837048,
1162
+ "grad_norm": 0.9354087114334106,
1163
+ "learning_rate": 0.0005003875222129446,
1164
+ "loss": 0.2568,
1165
+ "step": 70000
1166
+ },
1167
+ {
1168
+ "epoch": 4.528229173357312,
1169
+ "grad_norm": 0.9175921678543091,
1170
+ "learning_rate": 0.0004982465155115936,
1171
+ "loss": 0.2598,
1172
+ "step": 70500
1173
+ },
1174
+ {
1175
+ "epoch": 4.560344273877577,
1176
+ "grad_norm": 0.8964270353317261,
1177
+ "learning_rate": 0.0004961055088102426,
1178
+ "loss": 0.2571,
1179
+ "step": 71000
1180
+ },
1181
+ {
1182
+ "epoch": 4.592459374397842,
1183
+ "grad_norm": 0.9656769037246704,
1184
+ "learning_rate": 0.0004939645021088916,
1185
+ "loss": 0.255,
1186
+ "step": 71500
1187
+ },
1188
+ {
1189
+ "epoch": 4.600295458924786,
1190
+ "eval_loss": 0.25487568974494934,
1191
+ "eval_runtime": 6.2382,
1192
+ "eval_samples_per_second": 80.151,
1193
+ "eval_steps_per_second": 5.13,
1194
+ "step": 71622
1195
+ },
1196
+ {
1197
+ "epoch": 4.624574474918107,
1198
+ "grad_norm": 0.7101718187332153,
1199
+ "learning_rate": 0.0004918277774209434,
1200
+ "loss": 0.2608,
1201
+ "step": 72000
1202
+ },
1203
+ {
1204
+ "epoch": 4.656689575438371,
1205
+ "grad_norm": 1.1697875261306763,
1206
+ "learning_rate": 0.0004896867707195923,
1207
+ "loss": 0.255,
1208
+ "step": 72500
1209
+ },
1210
+ {
1211
+ "epoch": 4.688804675958636,
1212
+ "grad_norm": 1.2475125789642334,
1213
+ "learning_rate": 0.0004875457640182414,
1214
+ "loss": 0.255,
1215
+ "step": 73000
1216
+ },
1217
+ {
1218
+ "epoch": 4.7209197764789,
1219
+ "grad_norm": 1.0576250553131104,
1220
+ "learning_rate": 0.0004854047573168904,
1221
+ "loss": 0.2561,
1222
+ "step": 73500
1223
+ },
1224
+ {
1225
+ "epoch": 4.753034876999165,
1226
+ "grad_norm": 1.0766135454177856,
1227
+ "learning_rate": 0.00048326375061553945,
1228
+ "loss": 0.253,
1229
+ "step": 74000
1230
+ },
1231
+ {
1232
+ "epoch": 4.7851499775194295,
1233
+ "grad_norm": 1.3369269371032715,
1234
+ "learning_rate": 0.00048112274391418845,
1235
+ "loss": 0.2555,
1236
+ "step": 74500
1237
+ },
1238
+ {
1239
+ "epoch": 4.800308304964995,
1240
+ "eval_loss": 0.2528543770313263,
1241
+ "eval_runtime": 6.247,
1242
+ "eval_samples_per_second": 80.039,
1243
+ "eval_steps_per_second": 5.122,
1244
+ "step": 74736
1245
+ },
1246
+ {
1247
+ "epoch": 4.817265078039695,
1248
+ "grad_norm": 0.7865862846374512,
1249
+ "learning_rate": 0.0004789817372128375,
1250
+ "loss": 0.2556,
1251
+ "step": 75000
1252
+ },
1253
+ {
1254
+ "epoch": 4.849380178559959,
1255
+ "grad_norm": 0.8501922488212585,
1256
+ "learning_rate": 0.0004768407305114865,
1257
+ "loss": 0.2499,
1258
+ "step": 75500
1259
+ },
1260
+ {
1261
+ "epoch": 4.881495279080223,
1262
+ "grad_norm": 0.8366358280181885,
1263
+ "learning_rate": 0.0004747040058235382,
1264
+ "loss": 0.255,
1265
+ "step": 76000
1266
+ },
1267
+ {
1268
+ "epoch": 4.913610379600488,
1269
+ "grad_norm": 1.2609165906906128,
1270
+ "learning_rate": 0.00047256299912218727,
1271
+ "loss": 0.2571,
1272
+ "step": 76500
1273
+ },
1274
+ {
1275
+ "epoch": 4.945725480120752,
1276
+ "grad_norm": 1.1370539665222168,
1277
+ "learning_rate": 0.00047042199242083627,
1278
+ "loss": 0.2535,
1279
+ "step": 77000
1280
+ },
1281
+ {
1282
+ "epoch": 4.9778405806410175,
1283
+ "grad_norm": 0.8656703233718872,
1284
+ "learning_rate": 0.00046828098571948527,
1285
+ "loss": 0.2546,
1286
+ "step": 77500
1287
+ },
1288
+ {
1289
+ "epoch": 5.000321151005203,
1290
+ "eval_loss": 0.24991726875305176,
1291
+ "eval_runtime": 6.0887,
1292
+ "eval_samples_per_second": 82.12,
1293
+ "eval_steps_per_second": 5.256,
1294
+ "step": 77850
1295
+ },
1296
+ {
1297
+ "epoch": 5.009955681161282,
1298
+ "grad_norm": 0.8978527784347534,
1299
+ "learning_rate": 0.0004661485430449397,
1300
+ "loss": 0.2539,
1301
+ "step": 78000
1302
+ },
1303
+ {
1304
+ "epoch": 5.042070781681547,
1305
+ "grad_norm": 0.7171711921691895,
1306
+ "learning_rate": 0.00046400753634358874,
1307
+ "loss": 0.2436,
1308
+ "step": 78500
1309
+ },
1310
+ {
1311
+ "epoch": 5.074185882201811,
1312
+ "grad_norm": 1.4531939029693604,
1313
+ "learning_rate": 0.00046186652964223785,
1314
+ "loss": 0.2419,
1315
+ "step": 79000
1316
+ },
1317
+ {
1318
+ "epoch": 5.106300982722076,
1319
+ "grad_norm": 1.0456002950668335,
1320
+ "learning_rate": 0.00045972552294088685,
1321
+ "loss": 0.2428,
1322
+ "step": 79500
1323
+ },
1324
+ {
1325
+ "epoch": 5.1384160832423404,
1326
+ "grad_norm": 0.9373497366905212,
1327
+ "learning_rate": 0.0004575845162395359,
1328
+ "loss": 0.2456,
1329
+ "step": 80000
1330
+ },
1331
+ {
1332
+ "epoch": 5.170531183762606,
1333
+ "grad_norm": 0.8763256669044495,
1334
+ "learning_rate": 0.0004554435095381849,
1335
+ "loss": 0.2437,
1336
+ "step": 80500
1337
+ },
1338
+ {
1339
+ "epoch": 5.20033399704541,
1340
+ "eval_loss": 0.24868439137935638,
1341
+ "eval_runtime": 6.2785,
1342
+ "eval_samples_per_second": 79.637,
1343
+ "eval_steps_per_second": 5.097,
1344
+ "step": 80964
1345
+ },
1346
+ {
1347
+ "epoch": 5.20264628428287,
1348
+ "grad_norm": 0.8444309830665588,
1349
+ "learning_rate": 0.0004533067848502366,
1350
+ "loss": 0.2419,
1351
+ "step": 81000
1352
+ },
1353
+ {
1354
+ "epoch": 5.234761384803134,
1355
+ "grad_norm": 1.0627179145812988,
1356
+ "learning_rate": 0.00045116577814888567,
1357
+ "loss": 0.2421,
1358
+ "step": 81500
1359
+ },
1360
+ {
1361
+ "epoch": 5.266876485323399,
1362
+ "grad_norm": 0.8434215784072876,
1363
+ "learning_rate": 0.00044902477144753467,
1364
+ "loss": 0.2451,
1365
+ "step": 82000
1366
+ },
1367
+ {
1368
+ "epoch": 5.298991585843663,
1369
+ "grad_norm": 0.9707222580909729,
1370
+ "learning_rate": 0.0004468837647461837,
1371
+ "loss": 0.2454,
1372
+ "step": 82500
1373
+ },
1374
+ {
1375
+ "epoch": 5.3311066863639285,
1376
+ "grad_norm": 0.8533704280853271,
1377
+ "learning_rate": 0.0004447427580448327,
1378
+ "loss": 0.239,
1379
+ "step": 83000
1380
+ },
1381
+ {
1382
+ "epoch": 5.363221786884193,
1383
+ "grad_norm": 0.8274517059326172,
1384
+ "learning_rate": 0.0004426017513434817,
1385
+ "loss": 0.2452,
1386
+ "step": 83500
1387
+ },
1388
+ {
1389
+ "epoch": 5.395336887404458,
1390
+ "grad_norm": 0.729799211025238,
1391
+ "learning_rate": 0.0004404607446421308,
1392
+ "loss": 0.2457,
1393
+ "step": 84000
1394
+ },
1395
+ {
1396
+ "epoch": 5.400346843085619,
1397
+ "eval_loss": 0.24482692778110504,
1398
+ "eval_runtime": 6.0807,
1399
+ "eval_samples_per_second": 82.228,
1400
+ "eval_steps_per_second": 5.263,
1401
+ "step": 84078
1402
+ },
1403
+ {
1404
+ "epoch": 5.427451987924722,
1405
+ "grad_norm": 0.7857894897460938,
1406
+ "learning_rate": 0.0004383197379407798,
1407
+ "loss": 0.2487,
1408
+ "step": 84500
1409
+ },
1410
+ {
1411
+ "epoch": 5.459567088444987,
1412
+ "grad_norm": 0.9915536642074585,
1413
+ "learning_rate": 0.0004361830132528315,
1414
+ "loss": 0.2456,
1415
+ "step": 85000
1416
+ },
1417
+ {
1418
+ "epoch": 5.491682188965251,
1419
+ "grad_norm": 0.971025288105011,
1420
+ "learning_rate": 0.00043404628856488325,
1421
+ "loss": 0.245,
1422
+ "step": 85500
1423
+ },
1424
+ {
1425
+ "epoch": 5.5237972894855165,
1426
+ "grad_norm": 0.9953171014785767,
1427
+ "learning_rate": 0.00043190528186353225,
1428
+ "loss": 0.2432,
1429
+ "step": 86000
1430
+ },
1431
+ {
1432
+ "epoch": 5.555912390005781,
1433
+ "grad_norm": 0.7822274565696716,
1434
+ "learning_rate": 0.00042976427516218125,
1435
+ "loss": 0.2366,
1436
+ "step": 86500
1437
+ },
1438
+ {
1439
+ "epoch": 5.588027490526045,
1440
+ "grad_norm": 0.8769294619560242,
1441
+ "learning_rate": 0.0004276232684608303,
1442
+ "loss": 0.2424,
1443
+ "step": 87000
1444
+ },
1445
+ {
1446
+ "epoch": 5.600359689125827,
1447
+ "eval_loss": 0.24115297198295593,
1448
+ "eval_runtime": 6.2957,
1449
+ "eval_samples_per_second": 79.419,
1450
+ "eval_steps_per_second": 5.083,
1451
+ "step": 87192
1452
+ },
1453
+ {
1454
+ "epoch": 5.62014259104631,
1455
+ "grad_norm": 0.9870149493217468,
1456
+ "learning_rate": 0.0004254822617594793,
1457
+ "loss": 0.2403,
1458
+ "step": 87500
1459
+ },
1460
+ {
1461
+ "epoch": 5.652257691566574,
1462
+ "grad_norm": 1.1813750267028809,
1463
+ "learning_rate": 0.000423345537071531,
1464
+ "loss": 0.24,
1465
+ "step": 88000
1466
+ },
1467
+ {
1468
+ "epoch": 5.684372792086839,
1469
+ "grad_norm": 0.9698022603988647,
1470
+ "learning_rate": 0.00042120453037018007,
1471
+ "loss": 0.2424,
1472
+ "step": 88500
1473
+ },
1474
+ {
1475
+ "epoch": 5.716487892607104,
1476
+ "grad_norm": 0.8958451151847839,
1477
+ "learning_rate": 0.00041906352366882907,
1478
+ "loss": 0.2423,
1479
+ "step": 89000
1480
+ },
1481
+ {
1482
+ "epoch": 5.748602993127369,
1483
+ "grad_norm": 0.9841606020927429,
1484
+ "learning_rate": 0.0004169225169674781,
1485
+ "loss": 0.2435,
1486
+ "step": 89500
1487
+ },
1488
+ {
1489
+ "epoch": 5.780718093647633,
1490
+ "grad_norm": 0.859775185585022,
1491
+ "learning_rate": 0.00041478579227952983,
1492
+ "loss": 0.236,
1493
+ "step": 90000
1494
+ },
1495
+ {
1496
+ "epoch": 5.800372535166035,
1497
+ "eval_loss": 0.23942877352237701,
1498
+ "eval_runtime": 6.3254,
1499
+ "eval_samples_per_second": 79.046,
1500
+ "eval_steps_per_second": 5.059,
1501
+ "step": 90306
1502
+ },
1503
+ {
1504
+ "epoch": 5.812833194167898,
1505
+ "grad_norm": 0.8131700158119202,
1506
+ "learning_rate": 0.00041264478557817883,
1507
+ "loss": 0.2359,
1508
+ "step": 90500
1509
+ },
1510
+ {
1511
+ "epoch": 5.844948294688162,
1512
+ "grad_norm": 0.9345505237579346,
1513
+ "learning_rate": 0.0004105037788768279,
1514
+ "loss": 0.2363,
1515
+ "step": 91000
1516
+ },
1517
+ {
1518
+ "epoch": 5.8770633952084275,
1519
+ "grad_norm": 0.8505440950393677,
1520
+ "learning_rate": 0.0004083627721754769,
1521
+ "loss": 0.2406,
1522
+ "step": 91500
1523
+ },
1524
+ {
1525
+ "epoch": 5.909178495728692,
1526
+ "grad_norm": 0.7482045888900757,
1527
+ "learning_rate": 0.00040622176547412594,
1528
+ "loss": 0.24,
1529
+ "step": 92000
1530
+ },
1531
+ {
1532
+ "epoch": 5.941293596248956,
1533
+ "grad_norm": 1.017319917678833,
1534
+ "learning_rate": 0.00040408504078617765,
1535
+ "loss": 0.2422,
1536
+ "step": 92500
1537
+ },
1538
+ {
1539
+ "epoch": 5.973408696769221,
1540
+ "grad_norm": 0.7721553444862366,
1541
+ "learning_rate": 0.00040194403408482676,
1542
+ "loss": 0.2378,
1543
+ "step": 93000
1544
+ },
1545
+ {
1546
+ "epoch": 6.000385381206243,
1547
+ "eval_loss": 0.2388649582862854,
1548
+ "eval_runtime": 6.2121,
1549
+ "eval_samples_per_second": 80.488,
1550
+ "eval_steps_per_second": 5.151,
1551
+ "step": 93420
1552
+ },
1553
+ {
1554
+ "epoch": 6.005523797289485,
1555
+ "grad_norm": 1.0137486457824707,
1556
+ "learning_rate": 0.0003998030273834757,
1557
+ "loss": 0.2366,
1558
+ "step": 93500
1559
+ },
1560
+ {
1561
+ "epoch": 6.03763889780975,
1562
+ "grad_norm": 1.2038958072662354,
1563
+ "learning_rate": 0.00039766202068212476,
1564
+ "loss": 0.2254,
1565
+ "step": 94000
1566
+ },
1567
+ {
1568
+ "epoch": 6.069753998330015,
1569
+ "grad_norm": 1.1078770160675049,
1570
+ "learning_rate": 0.00039552101398077376,
1571
+ "loss": 0.2302,
1572
+ "step": 94500
1573
+ },
1574
+ {
1575
+ "epoch": 6.10186909885028,
1576
+ "grad_norm": 1.011448621749878,
1577
+ "learning_rate": 0.0003933800072794228,
1578
+ "loss": 0.2268,
1579
+ "step": 95000
1580
+ },
1581
+ {
1582
+ "epoch": 6.133984199370544,
1583
+ "grad_norm": 0.975534975528717,
1584
+ "learning_rate": 0.0003912390005780718,
1585
+ "loss": 0.2273,
1586
+ "step": 95500
1587
+ },
1588
+ {
1589
+ "epoch": 6.166099299890809,
1590
+ "grad_norm": 0.9862846732139587,
1591
+ "learning_rate": 0.0003890979938767208,
1592
+ "loss": 0.229,
1593
+ "step": 96000
1594
+ },
1595
+ {
1596
+ "epoch": 6.198214400411073,
1597
+ "grad_norm": 0.901968240737915,
1598
+ "learning_rate": 0.00038695698717536986,
1599
+ "loss": 0.2286,
1600
+ "step": 96500
1601
+ },
1602
+ {
1603
+ "epoch": 6.200398227246451,
1604
+ "eval_loss": 0.23516099154949188,
1605
+ "eval_runtime": 6.0745,
1606
+ "eval_samples_per_second": 82.312,
1607
+ "eval_steps_per_second": 5.268,
1608
+ "step": 96534
1609
+ },
1610
+ {
1611
+ "epoch": 6.2303295009313375,
1612
+ "grad_norm": 0.9036768078804016,
1613
+ "learning_rate": 0.00038482454450082434,
1614
+ "loss": 0.2288,
1615
+ "step": 97000
1616
+ },
1617
+ {
1618
+ "epoch": 6.262444601451603,
1619
+ "grad_norm": 0.763529360294342,
1620
+ "learning_rate": 0.00038268353779947334,
1621
+ "loss": 0.2256,
1622
+ "step": 97500
1623
+ },
1624
+ {
1625
+ "epoch": 6.294559701971867,
1626
+ "grad_norm": 0.6492093801498413,
1627
+ "learning_rate": 0.0003805425310981224,
1628
+ "loss": 0.223,
1629
+ "step": 98000
1630
+ },
1631
+ {
1632
+ "epoch": 6.326674802492132,
1633
+ "grad_norm": 0.8010545969009399,
1634
+ "learning_rate": 0.0003784015243967714,
1635
+ "loss": 0.2303,
1636
+ "step": 98500
1637
+ },
1638
+ {
1639
+ "epoch": 6.358789903012396,
1640
+ "grad_norm": 0.9058064222335815,
1641
+ "learning_rate": 0.0003762647997088231,
1642
+ "loss": 0.2295,
1643
+ "step": 99000
1644
+ },
1645
+ {
1646
+ "epoch": 6.390905003532661,
1647
+ "grad_norm": 0.9457159638404846,
1648
+ "learning_rate": 0.00037412379300747216,
1649
+ "loss": 0.2288,
1650
+ "step": 99500
1651
+ },
1652
+ {
1653
+ "epoch": 6.4004110732866595,
1654
+ "eval_loss": 0.23331154882907867,
1655
+ "eval_runtime": 6.2121,
1656
+ "eval_samples_per_second": 80.488,
1657
+ "eval_steps_per_second": 5.151,
1658
+ "step": 99648
1659
+ },
1660
+ {
1661
+ "epoch": 6.423020104052926,
1662
+ "grad_norm": 1.2119472026824951,
1663
+ "learning_rate": 0.00037198278630612116,
1664
+ "loss": 0.2276,
1665
+ "step": 100000
1666
+ },
1667
+ {
1668
+ "epoch": 6.455135204573191,
1669
+ "grad_norm": 0.9499243497848511,
1670
+ "learning_rate": 0.00036984177960477016,
1671
+ "loss": 0.2279,
1672
+ "step": 100500
1673
+ },
1674
+ {
1675
+ "epoch": 6.487250305093455,
1676
+ "grad_norm": 0.9835514426231384,
1677
+ "learning_rate": 0.0003677007729034192,
1678
+ "loss": 0.2267,
1679
+ "step": 101000
1680
+ },
1681
+ {
1682
+ "epoch": 6.519365405613719,
1683
+ "grad_norm": 0.7667780518531799,
1684
+ "learning_rate": 0.0003655597662020682,
1685
+ "loss": 0.2254,
1686
+ "step": 101500
1687
+ },
1688
+ {
1689
+ "epoch": 6.551480506133984,
1690
+ "grad_norm": 1.0629656314849854,
1691
+ "learning_rate": 0.00036341875950071726,
1692
+ "loss": 0.224,
1693
+ "step": 102000
1694
+ },
1695
+ {
1696
+ "epoch": 6.5835956066542485,
1697
+ "grad_norm": 0.9015426635742188,
1698
+ "learning_rate": 0.00036127775279936626,
1699
+ "loss": 0.2276,
1700
+ "step": 102500
1701
+ },
1702
+ {
1703
+ "epoch": 6.600423919326867,
1704
+ "eval_loss": 0.23043328523635864,
1705
+ "eval_runtime": 6.1236,
1706
+ "eval_samples_per_second": 81.651,
1707
+ "eval_steps_per_second": 5.226,
1708
+ "step": 102762
1709
+ },
1710
+ {
1711
+ "epoch": 6.615710707174514,
1712
+ "grad_norm": 0.7688403725624084,
1713
+ "learning_rate": 0.00035913674609801526,
1714
+ "loss": 0.2232,
1715
+ "step": 103000
1716
+ },
1717
+ {
1718
+ "epoch": 6.647825807694778,
1719
+ "grad_norm": 0.9431188106536865,
1720
+ "learning_rate": 0.0003569957393966643,
1721
+ "loss": 0.2285,
1722
+ "step": 103500
1723
+ },
1724
+ {
1725
+ "epoch": 6.679940908215043,
1726
+ "grad_norm": 0.8055190443992615,
1727
+ "learning_rate": 0.00035485473269531337,
1728
+ "loss": 0.2258,
1729
+ "step": 104000
1730
+ },
1731
+ {
1732
+ "epoch": 6.712056008735307,
1733
+ "grad_norm": 1.7640315294265747,
1734
+ "learning_rate": 0.00035271372599396237,
1735
+ "loss": 0.2232,
1736
+ "step": 104500
1737
+ },
1738
+ {
1739
+ "epoch": 6.744171109255572,
1740
+ "grad_norm": 0.870721697807312,
1741
+ "learning_rate": 0.0003505727192926114,
1742
+ "loss": 0.2247,
1743
+ "step": 105000
1744
+ },
1745
+ {
1746
+ "epoch": 6.7762862097758365,
1747
+ "grad_norm": 0.7945193648338318,
1748
+ "learning_rate": 0.00034843599460466314,
1749
+ "loss": 0.2233,
1750
+ "step": 105500
1751
+ },
1752
+ {
1753
+ "epoch": 6.800436765367076,
1754
+ "eval_loss": 0.22924071550369263,
1755
+ "eval_runtime": 6.1669,
1756
+ "eval_samples_per_second": 81.078,
1757
+ "eval_steps_per_second": 5.189,
1758
+ "step": 105876
1759
+ },
1760
+ {
1761
+ "epoch": 6.808401310296102,
1762
+ "grad_norm": 0.6762036681175232,
1763
+ "learning_rate": 0.00034629498790331214,
1764
+ "loss": 0.2243,
1765
+ "step": 106000
1766
+ },
1767
+ {
1768
+ "epoch": 6.840516410816366,
1769
+ "grad_norm": 0.9264719486236572,
1770
+ "learning_rate": 0.0003441539812019612,
1771
+ "loss": 0.2226,
1772
+ "step": 106500
1773
+ },
1774
+ {
1775
+ "epoch": 6.87263151133663,
1776
+ "grad_norm": 0.9683498740196228,
1777
+ "learning_rate": 0.0003420129745006102,
1778
+ "loss": 0.2254,
1779
+ "step": 107000
1780
+ },
1781
+ {
1782
+ "epoch": 6.904746611856895,
1783
+ "grad_norm": 1.0541504621505737,
1784
+ "learning_rate": 0.0003398805318260646,
1785
+ "loss": 0.2243,
1786
+ "step": 107500
1787
+ },
1788
+ {
1789
+ "epoch": 6.936861712377159,
1790
+ "grad_norm": 0.977885901927948,
1791
+ "learning_rate": 0.00033773952512471366,
1792
+ "loss": 0.2198,
1793
+ "step": 108000
1794
+ },
1795
+ {
1796
+ "epoch": 6.9689768128974245,
1797
+ "grad_norm": 0.7070357799530029,
1798
+ "learning_rate": 0.00033559851842336266,
1799
+ "loss": 0.2247,
1800
+ "step": 108500
1801
+ },
1802
+ {
1803
+ "epoch": 7.000449611407284,
1804
+ "eval_loss": 0.22423392534255981,
1805
+ "eval_runtime": 6.0707,
1806
+ "eval_samples_per_second": 82.363,
1807
+ "eval_steps_per_second": 5.271,
1808
+ "step": 108990
1809
+ },
1810
+ {
1811
+ "epoch": 7.001091913417689,
1812
+ "grad_norm": 0.884039044380188,
1813
+ "learning_rate": 0.0003334575117220117,
1814
+ "loss": 0.2201,
1815
+ "step": 109000
1816
+ },
1817
+ {
1818
+ "epoch": 7.033207013937954,
1819
+ "grad_norm": 1.0540134906768799,
1820
+ "learning_rate": 0.0003313165050206607,
1821
+ "loss": 0.2088,
1822
+ "step": 109500
1823
+ },
1824
+ {
1825
+ "epoch": 7.065322114458218,
1826
+ "grad_norm": 0.931377112865448,
1827
+ "learning_rate": 0.0003291754983193097,
1828
+ "loss": 0.2071,
1829
+ "step": 110000
1830
+ },
1831
+ {
1832
+ "epoch": 7.097437214978483,
1833
+ "grad_norm": 0.6534168720245361,
1834
+ "learning_rate": 0.00032703449161795877,
1835
+ "loss": 0.2111,
1836
+ "step": 110500
1837
+ },
1838
+ {
1839
+ "epoch": 7.1295523154987475,
1840
+ "grad_norm": 0.8932952880859375,
1841
+ "learning_rate": 0.0003248934849166078,
1842
+ "loss": 0.2154,
1843
+ "step": 111000
1844
+ },
1845
+ {
1846
+ "epoch": 7.161667416019013,
1847
+ "grad_norm": 0.969283938407898,
1848
+ "learning_rate": 0.00032275676022865954,
1849
+ "loss": 0.2098,
1850
+ "step": 111500
1851
+ },
1852
+ {
1853
+ "epoch": 7.193782516539277,
1854
+ "grad_norm": 0.6501076817512512,
1855
+ "learning_rate": 0.0003206157535273086,
1856
+ "loss": 0.212,
1857
+ "step": 112000
1858
+ },
1859
+ {
1860
+ "epoch": 7.200462457447491,
1861
+ "eval_loss": 0.22263632714748383,
1862
+ "eval_runtime": 6.2158,
1863
+ "eval_samples_per_second": 80.441,
1864
+ "eval_steps_per_second": 5.148,
1865
+ "step": 112104
1866
+ },
1867
+ {
1868
+ "epoch": 7.225897617059541,
1869
+ "grad_norm": 0.8914813995361328,
1870
+ "learning_rate": 0.0003184747468259576,
1871
+ "loss": 0.2096,
1872
+ "step": 112500
1873
+ },
1874
+ {
1875
+ "epoch": 7.258012717579806,
1876
+ "grad_norm": 0.7064047455787659,
1877
+ "learning_rate": 0.0003163337401246066,
1878
+ "loss": 0.208,
1879
+ "step": 113000
1880
+ },
1881
+ {
1882
+ "epoch": 7.29012781810007,
1883
+ "grad_norm": 0.83128422498703,
1884
+ "learning_rate": 0.00031419273342325564,
1885
+ "loss": 0.2086,
1886
+ "step": 113500
1887
+ },
1888
+ {
1889
+ "epoch": 7.3222429186203355,
1890
+ "grad_norm": 0.9413278102874756,
1891
+ "learning_rate": 0.00031205172672190464,
1892
+ "loss": 0.211,
1893
+ "step": 114000
1894
+ },
1895
+ {
1896
+ "epoch": 7.3543580191406,
1897
+ "grad_norm": 1.0920408964157104,
1898
+ "learning_rate": 0.0003099150020339564,
1899
+ "loss": 0.2069,
1900
+ "step": 114500
1901
+ },
1902
+ {
1903
+ "epoch": 7.386473119660865,
1904
+ "grad_norm": 0.8638095259666443,
1905
+ "learning_rate": 0.0003077739953326054,
1906
+ "loss": 0.2091,
1907
+ "step": 115000
1908
+ },
1909
+ {
1910
+ "epoch": 7.4004753034877,
1911
+ "eval_loss": 0.22307626903057098,
1912
+ "eval_runtime": 6.1561,
1913
+ "eval_samples_per_second": 81.22,
1914
+ "eval_steps_per_second": 5.198,
1915
+ "step": 115218
1916
+ },
1917
+ {
1918
+ "epoch": 7.418588220181129,
1919
+ "grad_norm": 1.219592809677124,
1920
+ "learning_rate": 0.0003056329886312544,
1921
+ "loss": 0.2137,
1922
+ "step": 115500
1923
+ },
1924
+ {
1925
+ "epoch": 7.450703320701394,
1926
+ "grad_norm": 0.9182437658309937,
1927
+ "learning_rate": 0.00030349198192990346,
1928
+ "loss": 0.2115,
1929
+ "step": 116000
1930
+ },
1931
+ {
1932
+ "epoch": 7.482818421221658,
1933
+ "grad_norm": 1.1176888942718506,
1934
+ "learning_rate": 0.00030135097522855246,
1935
+ "loss": 0.2105,
1936
+ "step": 116500
1937
+ },
1938
+ {
1939
+ "epoch": 7.5149335217419235,
1940
+ "grad_norm": 0.8202816843986511,
1941
+ "learning_rate": 0.00029921425054060417,
1942
+ "loss": 0.2113,
1943
+ "step": 117000
1944
+ },
1945
+ {
1946
+ "epoch": 7.547048622262188,
1947
+ "grad_norm": 1.0972435474395752,
1948
+ "learning_rate": 0.0002970732438392532,
1949
+ "loss": 0.2084,
1950
+ "step": 117500
1951
+ },
1952
+ {
1953
+ "epoch": 7.579163722782452,
1954
+ "grad_norm": 0.8118135333061218,
1955
+ "learning_rate": 0.0002949322371379022,
1956
+ "loss": 0.2113,
1957
+ "step": 118000
1958
+ },
1959
+ {
1960
+ "epoch": 7.600488149527908,
1961
+ "eval_loss": 0.2188137173652649,
1962
+ "eval_runtime": 6.196,
1963
+ "eval_samples_per_second": 80.698,
1964
+ "eval_steps_per_second": 5.165,
1965
+ "step": 118332
1966
+ },
1967
+ {
1968
+ "epoch": 7.611278823302717,
1969
+ "grad_norm": 0.9078381061553955,
1970
+ "learning_rate": 0.0002927912304365513,
1971
+ "loss": 0.2103,
1972
+ "step": 118500
1973
+ },
1974
+ {
1975
+ "epoch": 7.643393923822981,
1976
+ "grad_norm": 0.9369151592254639,
1977
+ "learning_rate": 0.00029065450574860304,
1978
+ "loss": 0.2128,
1979
+ "step": 119000
1980
+ },
1981
+ {
1982
+ "epoch": 7.6755090243432464,
1983
+ "grad_norm": 0.7976478934288025,
1984
+ "learning_rate": 0.00028851349904725204,
1985
+ "loss": 0.2069,
1986
+ "step": 119500
1987
+ },
1988
+ {
1989
+ "epoch": 7.707624124863511,
1990
+ "grad_norm": 1.0017156600952148,
1991
+ "learning_rate": 0.00028637249234590104,
1992
+ "loss": 0.2096,
1993
+ "step": 120000
1994
+ },
1995
+ {
1996
+ "epoch": 7.739739225383776,
1997
+ "grad_norm": 0.5405673980712891,
1998
+ "learning_rate": 0.0002842314856445501,
1999
+ "loss": 0.2075,
2000
+ "step": 120500
2001
+ },
2002
+ {
2003
+ "epoch": 7.77185432590404,
2004
+ "grad_norm": 1.2480295896530151,
2005
+ "learning_rate": 0.0002820904789431991,
2006
+ "loss": 0.2086,
2007
+ "step": 121000
2008
+ },
2009
+ {
2010
+ "epoch": 7.800500995568116,
2011
+ "eval_loss": 0.21540413796901703,
2012
+ "eval_runtime": 6.1008,
2013
+ "eval_samples_per_second": 81.957,
2014
+ "eval_steps_per_second": 5.245,
2015
+ "step": 121446
2016
+ },
2017
+ {
2018
+ "epoch": 7.803969426424304,
2019
+ "grad_norm": 0.9076843857765198,
2020
+ "learning_rate": 0.0002799537542552508,
2021
+ "loss": 0.2095,
2022
+ "step": 121500
2023
+ },
2024
+ {
2025
+ "epoch": 7.836084526944569,
2026
+ "grad_norm": 1.0500370264053345,
2027
+ "learning_rate": 0.00027781274755389986,
2028
+ "loss": 0.2082,
2029
+ "step": 122000
2030
+ },
2031
+ {
2032
+ "epoch": 7.868199627464834,
2033
+ "grad_norm": 1.1205352544784546,
2034
+ "learning_rate": 0.00027567174085254886,
2035
+ "loss": 0.2091,
2036
+ "step": 122500
2037
+ },
2038
+ {
2039
+ "epoch": 7.900314727985099,
2040
+ "grad_norm": 0.6767524480819702,
2041
+ "learning_rate": 0.0002735307341511979,
2042
+ "loss": 0.2094,
2043
+ "step": 123000
2044
+ },
2045
+ {
2046
+ "epoch": 7.932429828505363,
2047
+ "grad_norm": 0.7665618062019348,
2048
+ "learning_rate": 0.0002713897274498469,
2049
+ "loss": 0.2062,
2050
+ "step": 123500
2051
+ },
2052
+ {
2053
+ "epoch": 7.964544929025628,
2054
+ "grad_norm": 0.8362904191017151,
2055
+ "learning_rate": 0.0002692530027618986,
2056
+ "loss": 0.2088,
2057
+ "step": 124000
2058
+ },
2059
+ {
2060
+ "epoch": 7.996660029545892,
2061
+ "grad_norm": 0.9630074501037598,
2062
+ "learning_rate": 0.0002671119960605477,
2063
+ "loss": 0.2045,
2064
+ "step": 124500
2065
+ },
2066
+ {
2067
+ "epoch": 8.000513841608324,
2068
+ "eval_loss": 0.21059761941432953,
2069
+ "eval_runtime": 6.0614,
2070
+ "eval_samples_per_second": 82.489,
2071
+ "eval_steps_per_second": 5.279,
2072
+ "step": 124560
2073
+ },
2074
+ {
2075
+ "epoch": 8.028775130066157,
2076
+ "grad_norm": 0.975684642791748,
2077
+ "learning_rate": 0.0002649709893591967,
2078
+ "loss": 0.1926,
2079
+ "step": 125000
2080
+ },
2081
+ {
2082
+ "epoch": 8.060890230586422,
2083
+ "grad_norm": 1.0443758964538574,
2084
+ "learning_rate": 0.00026282998265784573,
2085
+ "loss": 0.1941,
2086
+ "step": 125500
2087
+ },
2088
+ {
2089
+ "epoch": 8.093005331106687,
2090
+ "grad_norm": 0.9995091557502747,
2091
+ "learning_rate": 0.0002606889759564948,
2092
+ "loss": 0.1938,
2093
+ "step": 126000
2094
+ },
2095
+ {
2096
+ "epoch": 8.125120431626952,
2097
+ "grad_norm": 0.8805562853813171,
2098
+ "learning_rate": 0.0002585522512685465,
2099
+ "loss": 0.1919,
2100
+ "step": 126500
2101
+ },
2102
+ {
2103
+ "epoch": 8.157235532147215,
2104
+ "grad_norm": 0.8583112359046936,
2105
+ "learning_rate": 0.0002564155265805982,
2106
+ "loss": 0.1947,
2107
+ "step": 127000
2108
+ },
2109
+ {
2110
+ "epoch": 8.18935063266748,
2111
+ "grad_norm": 0.7845993638038635,
2112
+ "learning_rate": 0.00025427451987924726,
2113
+ "loss": 0.1936,
2114
+ "step": 127500
2115
+ },
2116
+ {
2117
+ "epoch": 8.200526687648532,
2118
+ "eval_loss": 0.2119937390089035,
2119
+ "eval_runtime": 6.2291,
2120
+ "eval_samples_per_second": 80.269,
2121
+ "eval_steps_per_second": 5.137,
2122
+ "step": 127674
2123
+ },
2124
+ {
2125
+ "epoch": 8.221465733187745,
2126
+ "grad_norm": 0.910618782043457,
2127
+ "learning_rate": 0.00025213351317789626,
2128
+ "loss": 0.1956,
2129
+ "step": 128000
2130
+ },
2131
+ {
2132
+ "epoch": 8.253580833708009,
2133
+ "grad_norm": 0.9305772185325623,
2134
+ "learning_rate": 0.00024999250647654526,
2135
+ "loss": 0.1959,
2136
+ "step": 128500
2137
+ },
2138
+ {
2139
+ "epoch": 8.285695934228274,
2140
+ "grad_norm": 0.9778631925582886,
2141
+ "learning_rate": 0.0002478514997751943,
2142
+ "loss": 0.1954,
2143
+ "step": 129000
2144
+ },
2145
+ {
2146
+ "epoch": 8.317811034748539,
2147
+ "grad_norm": 0.989513099193573,
2148
+ "learning_rate": 0.0002457104930738433,
2149
+ "loss": 0.1935,
2150
+ "step": 129500
2151
+ },
2152
+ {
2153
+ "epoch": 8.349926135268804,
2154
+ "grad_norm": 0.7872004508972168,
2155
+ "learning_rate": 0.00024356948637249234,
2156
+ "loss": 0.1962,
2157
+ "step": 130000
2158
+ },
2159
+ {
2160
+ "epoch": 8.382041235789067,
2161
+ "grad_norm": 0.9529395699501038,
2162
+ "learning_rate": 0.00024142847967114137,
2163
+ "loss": 0.1942,
2164
+ "step": 130500
2165
+ },
2166
+ {
2167
+ "epoch": 8.40053953368874,
2168
+ "eval_loss": 0.20523549616336823,
2169
+ "eval_runtime": 6.0951,
2170
+ "eval_samples_per_second": 82.034,
2171
+ "eval_steps_per_second": 5.25,
2172
+ "step": 130788
2173
+ },
2174
+ {
2175
+ "epoch": 8.414156336309333,
2176
+ "grad_norm": 0.8992115259170532,
2177
+ "learning_rate": 0.0002392917549831931,
2178
+ "loss": 0.1972,
2179
+ "step": 131000
2180
+ },
2181
+ {
2182
+ "epoch": 8.446271436829598,
2183
+ "grad_norm": 0.9308871030807495,
2184
+ "learning_rate": 0.0002371507482818421,
2185
+ "loss": 0.1939,
2186
+ "step": 131500
2187
+ },
2188
+ {
2189
+ "epoch": 8.478386537349863,
2190
+ "grad_norm": 0.8216313123703003,
2191
+ "learning_rate": 0.00023500974158049113,
2192
+ "loss": 0.1951,
2193
+ "step": 132000
2194
+ },
2195
+ {
2196
+ "epoch": 8.510501637870126,
2197
+ "grad_norm": 1.0425806045532227,
2198
+ "learning_rate": 0.00023286873487914021,
2199
+ "loss": 0.1932,
2200
+ "step": 132500
2201
+ },
2202
+ {
2203
+ "epoch": 8.542616738390391,
2204
+ "grad_norm": 0.7375442981719971,
2205
+ "learning_rate": 0.00023072772817778921,
2206
+ "loss": 0.1964,
2207
+ "step": 133000
2208
+ },
2209
+ {
2210
+ "epoch": 8.574731838910656,
2211
+ "grad_norm": 1.024661660194397,
2212
+ "learning_rate": 0.00022859100348984095,
2213
+ "loss": 0.194,
2214
+ "step": 133500
2215
+ },
2216
+ {
2217
+ "epoch": 8.60055237972895,
2218
+ "eval_loss": 0.20276524126529694,
2219
+ "eval_runtime": 6.1907,
2220
+ "eval_samples_per_second": 80.766,
2221
+ "eval_steps_per_second": 5.169,
2222
+ "step": 133902
2223
+ },
2224
+ {
2225
+ "epoch": 8.60684693943092,
2226
+ "grad_norm": 0.9550065398216248,
2227
+ "learning_rate": 0.00022645427880189266,
2228
+ "loss": 0.1951,
2229
+ "step": 134000
2230
+ },
2231
+ {
2232
+ "epoch": 8.638962039951185,
2233
+ "grad_norm": 0.8019956946372986,
2234
+ "learning_rate": 0.0002243132721005417,
2235
+ "loss": 0.1906,
2236
+ "step": 134500
2237
+ },
2238
+ {
2239
+ "epoch": 8.67107714047145,
2240
+ "grad_norm": 0.8859021067619324,
2241
+ "learning_rate": 0.00022217226539919072,
2242
+ "loss": 0.1931,
2243
+ "step": 135000
2244
+ },
2245
+ {
2246
+ "epoch": 8.703192240991715,
2247
+ "grad_norm": 0.8394747972488403,
2248
+ "learning_rate": 0.00022003125869783974,
2249
+ "loss": 0.1931,
2250
+ "step": 135500
2251
+ },
2252
+ {
2253
+ "epoch": 8.735307341511978,
2254
+ "grad_norm": 1.1710271835327148,
2255
+ "learning_rate": 0.00021789025199648877,
2256
+ "loss": 0.1948,
2257
+ "step": 136000
2258
+ },
2259
+ {
2260
+ "epoch": 8.767422442032244,
2261
+ "grad_norm": 0.8720309138298035,
2262
+ "learning_rate": 0.00021574924529513777,
2263
+ "loss": 0.1895,
2264
+ "step": 136500
2265
+ },
2266
+ {
2267
+ "epoch": 8.799537542552509,
2268
+ "grad_norm": 1.0000699758529663,
2269
+ "learning_rate": 0.0002136125206071895,
2270
+ "loss": 0.1971,
2271
+ "step": 137000
2272
+ },
2273
+ {
2274
+ "epoch": 8.800565225769157,
2275
+ "eval_loss": 0.20202526450157166,
2276
+ "eval_runtime": 6.0835,
2277
+ "eval_samples_per_second": 82.19,
2278
+ "eval_steps_per_second": 5.26,
2279
+ "step": 137016
2280
+ },
2281
+ {
2282
+ "epoch": 8.831652643072772,
2283
+ "grad_norm": 1.1396946907043457,
2284
+ "learning_rate": 0.00021147151390583853,
2285
+ "loss": 0.1905,
2286
+ "step": 137500
2287
+ },
2288
+ {
2289
+ "epoch": 8.863767743593037,
2290
+ "grad_norm": 1.0040215253829956,
2291
+ "learning_rate": 0.00020933050720448756,
2292
+ "loss": 0.1923,
2293
+ "step": 138000
2294
+ },
2295
+ {
2296
+ "epoch": 8.895882844113302,
2297
+ "grad_norm": 1.1115257740020752,
2298
+ "learning_rate": 0.00020718950050313656,
2299
+ "loss": 0.1876,
2300
+ "step": 138500
2301
+ },
2302
+ {
2303
+ "epoch": 8.927997944633567,
2304
+ "grad_norm": 0.7989093661308289,
2305
+ "learning_rate": 0.0002050484938017856,
2306
+ "loss": 0.1944,
2307
+ "step": 139000
2308
+ },
2309
+ {
2310
+ "epoch": 8.96011304515383,
2311
+ "grad_norm": 0.9375371932983398,
2312
+ "learning_rate": 0.00020290748710043461,
2313
+ "loss": 0.1925,
2314
+ "step": 139500
2315
+ },
2316
+ {
2317
+ "epoch": 8.992228145674096,
2318
+ "grad_norm": 0.8365890383720398,
2319
+ "learning_rate": 0.00020076648039908367,
2320
+ "loss": 0.1878,
2321
+ "step": 140000
2322
+ },
2323
+ {
2324
+ "epoch": 9.000578071809365,
2325
+ "eval_loss": 0.19777676463127136,
2326
+ "eval_runtime": 6.2129,
2327
+ "eval_samples_per_second": 80.478,
2328
+ "eval_steps_per_second": 5.151,
2329
+ "step": 140130
2330
+ },
2331
+ {
2332
+ "epoch": 9.024343246194361,
2333
+ "grad_norm": 1.0316485166549683,
2334
+ "learning_rate": 0.00019862975571113538,
2335
+ "loss": 0.1787,
2336
+ "step": 140500
2337
+ },
2338
+ {
2339
+ "epoch": 9.056458346714626,
2340
+ "grad_norm": 1.0442605018615723,
2341
+ "learning_rate": 0.0001964887490097844,
2342
+ "loss": 0.1767,
2343
+ "step": 141000
2344
+ },
2345
+ {
2346
+ "epoch": 9.08857344723489,
2347
+ "grad_norm": 0.840471088886261,
2348
+ "learning_rate": 0.00019434774230843343,
2349
+ "loss": 0.1785,
2350
+ "step": 141500
2351
+ },
2352
+ {
2353
+ "epoch": 9.120688547755154,
2354
+ "grad_norm": 0.7494236826896667,
2355
+ "learning_rate": 0.00019220673560708246,
2356
+ "loss": 0.1765,
2357
+ "step": 142000
2358
+ },
2359
+ {
2360
+ "epoch": 9.15280364827542,
2361
+ "grad_norm": 0.8309280276298523,
2362
+ "learning_rate": 0.00019006572890573149,
2363
+ "loss": 0.1803,
2364
+ "step": 142500
2365
+ },
2366
+ {
2367
+ "epoch": 9.184918748795683,
2368
+ "grad_norm": 0.9183410406112671,
2369
+ "learning_rate": 0.0001879247222043805,
2370
+ "loss": 0.1787,
2371
+ "step": 143000
2372
+ },
2373
+ {
2374
+ "epoch": 9.200590917849572,
2375
+ "eval_loss": 0.19720293581485748,
2376
+ "eval_runtime": 6.1699,
2377
+ "eval_samples_per_second": 81.039,
2378
+ "eval_steps_per_second": 5.186,
2379
+ "step": 143244
2380
+ },
2381
+ {
2382
+ "epoch": 9.217033849315948,
2383
+ "grad_norm": 0.7478104829788208,
2384
+ "learning_rate": 0.00018578371550302954,
2385
+ "loss": 0.1769,
2386
+ "step": 143500
2387
+ },
2388
+ {
2389
+ "epoch": 9.249148949836213,
2390
+ "grad_norm": 1.0826218128204346,
2391
+ "learning_rate": 0.00018364270880167857,
2392
+ "loss": 0.1799,
2393
+ "step": 144000
2394
+ },
2395
+ {
2396
+ "epoch": 9.281264050356478,
2397
+ "grad_norm": 0.8286280632019043,
2398
+ "learning_rate": 0.00018150170210032757,
2399
+ "loss": 0.1758,
2400
+ "step": 144500
2401
+ },
2402
+ {
2403
+ "epoch": 9.313379150876742,
2404
+ "grad_norm": 1.1499660015106201,
2405
+ "learning_rate": 0.00017936497741237933,
2406
+ "loss": 0.1772,
2407
+ "step": 145000
2408
+ },
2409
+ {
2410
+ "epoch": 9.345494251397007,
2411
+ "grad_norm": 0.958096981048584,
2412
+ "learning_rate": 0.00017722397071102833,
2413
+ "loss": 0.1796,
2414
+ "step": 145500
2415
+ },
2416
+ {
2417
+ "epoch": 9.377609351917272,
2418
+ "grad_norm": 1.072838306427002,
2419
+ "learning_rate": 0.00017508296400967736,
2420
+ "loss": 0.1788,
2421
+ "step": 146000
2422
+ },
2423
+ {
2424
+ "epoch": 9.40060376388978,
2425
+ "eval_loss": 0.1962149292230606,
2426
+ "eval_runtime": 6.3241,
2427
+ "eval_samples_per_second": 79.063,
2428
+ "eval_steps_per_second": 5.06,
2429
+ "step": 146358
2430
+ },
2431
+ {
2432
+ "epoch": 9.409724452437537,
2433
+ "grad_norm": 0.8851492404937744,
2434
+ "learning_rate": 0.00017294195730832638,
2435
+ "loss": 0.1792,
2436
+ "step": 146500
2437
+ },
2438
+ {
2439
+ "epoch": 9.4418395529578,
2440
+ "grad_norm": 0.8513966202735901,
2441
+ "learning_rate": 0.00017080523262037812,
2442
+ "loss": 0.1795,
2443
+ "step": 147000
2444
+ },
2445
+ {
2446
+ "epoch": 9.473954653478065,
2447
+ "grad_norm": 0.9359699487686157,
2448
+ "learning_rate": 0.00016866422591902712,
2449
+ "loss": 0.177,
2450
+ "step": 147500
2451
+ },
2452
+ {
2453
+ "epoch": 9.50606975399833,
2454
+ "grad_norm": 0.8794530034065247,
2455
+ "learning_rate": 0.00016652321921767615,
2456
+ "loss": 0.1796,
2457
+ "step": 148000
2458
+ },
2459
+ {
2460
+ "epoch": 9.538184854518594,
2461
+ "grad_norm": 0.7300311326980591,
2462
+ "learning_rate": 0.00016438221251632517,
2463
+ "loss": 0.1797,
2464
+ "step": 148500
2465
+ },
2466
+ {
2467
+ "epoch": 9.570299955038859,
2468
+ "grad_norm": 1.0327720642089844,
2469
+ "learning_rate": 0.00016224120581497423,
2470
+ "loss": 0.1768,
2471
+ "step": 149000
2472
+ },
2473
+ {
2474
+ "epoch": 9.60061660992999,
2475
+ "eval_loss": 0.19196225702762604,
2476
+ "eval_runtime": 6.229,
2477
+ "eval_samples_per_second": 80.27,
2478
+ "eval_steps_per_second": 5.137,
2479
+ "step": 149472
2480
+ },
2481
+ {
2482
+ "epoch": 9.602415055559124,
2483
+ "grad_norm": 0.8427177667617798,
2484
+ "learning_rate": 0.00016010019911362323,
2485
+ "loss": 0.1771,
2486
+ "step": 149500
2487
+ },
2488
+ {
2489
+ "epoch": 9.63453015607939,
2490
+ "grad_norm": 0.7671401500701904,
2491
+ "learning_rate": 0.00015796347442567497,
2492
+ "loss": 0.1757,
2493
+ "step": 150000
2494
+ },
2495
+ {
2496
+ "epoch": 9.666645256599653,
2497
+ "grad_norm": 0.9293390512466431,
2498
+ "learning_rate": 0.000155822467724324,
2499
+ "loss": 0.1803,
2500
+ "step": 150500
2501
+ },
2502
+ {
2503
+ "epoch": 9.698760357119918,
2504
+ "grad_norm": 0.9376311898231506,
2505
+ "learning_rate": 0.000153681461022973,
2506
+ "loss": 0.1755,
2507
+ "step": 151000
2508
+ },
2509
+ {
2510
+ "epoch": 9.730875457640183,
2511
+ "grad_norm": 0.7565805912017822,
2512
+ "learning_rate": 0.00015154045432162202,
2513
+ "loss": 0.1761,
2514
+ "step": 151500
2515
+ },
2516
+ {
2517
+ "epoch": 9.762990558160446,
2518
+ "grad_norm": 0.7578882575035095,
2519
+ "learning_rate": 0.00014939944762027105,
2520
+ "loss": 0.177,
2521
+ "step": 152000
2522
+ },
2523
+ {
2524
+ "epoch": 9.795105658680711,
2525
+ "grad_norm": 1.100631594657898,
2526
+ "learning_rate": 0.00014726272293232278,
2527
+ "loss": 0.1741,
2528
+ "step": 152500
2529
+ },
2530
+ {
2531
+ "epoch": 9.800629455970197,
2532
+ "eval_loss": 0.18931564688682556,
2533
+ "eval_runtime": 6.2199,
2534
+ "eval_samples_per_second": 80.388,
2535
+ "eval_steps_per_second": 5.145,
2536
+ "step": 152586
2537
+ },
2538
+ {
2539
+ "epoch": 9.827220759200976,
2540
+ "grad_norm": 1.0236527919769287,
2541
+ "learning_rate": 0.0001451217162309718,
2542
+ "loss": 0.1764,
2543
+ "step": 153000
2544
+ },
2545
+ {
2546
+ "epoch": 9.859335859721241,
2547
+ "grad_norm": 0.955736517906189,
2548
+ "learning_rate": 0.00014298070952962084,
2549
+ "loss": 0.1732,
2550
+ "step": 153500
2551
+ },
2552
+ {
2553
+ "epoch": 9.891450960241505,
2554
+ "grad_norm": 0.9530282020568848,
2555
+ "learning_rate": 0.00014083970282826986,
2556
+ "loss": 0.1751,
2557
+ "step": 154000
2558
+ },
2559
+ {
2560
+ "epoch": 9.92356606076177,
2561
+ "grad_norm": 1.0680643320083618,
2562
+ "learning_rate": 0.0001386986961269189,
2563
+ "loss": 0.1763,
2564
+ "step": 154500
2565
+ },
2566
+ {
2567
+ "epoch": 9.955681161282035,
2568
+ "grad_norm": 1.0151365995407104,
2569
+ "learning_rate": 0.0001365576894255679,
2570
+ "loss": 0.1742,
2571
+ "step": 155000
2572
+ },
2573
+ {
2574
+ "epoch": 9.9877962618023,
2575
+ "grad_norm": 0.9112036824226379,
2576
+ "learning_rate": 0.00013442096473761963,
2577
+ "loss": 0.172,
2578
+ "step": 155500
2579
+ },
2580
+ {
2581
+ "epoch": 10.000642302010405,
2582
+ "eval_loss": 0.18885616958141327,
2583
+ "eval_runtime": 6.0944,
2584
+ "eval_samples_per_second": 82.043,
2585
+ "eval_steps_per_second": 5.251,
2586
+ "step": 155700
2587
+ },
2588
+ {
2589
+ "epoch": 10.019911362322564,
2590
+ "grad_norm": 0.7300705909729004,
2591
+ "learning_rate": 0.00013228424004967137,
2592
+ "loss": 0.1665,
2593
+ "step": 156000
2594
+ },
2595
+ {
2596
+ "epoch": 10.052026462842829,
2597
+ "grad_norm": 0.8737722635269165,
2598
+ "learning_rate": 0.0001301432333483204,
2599
+ "loss": 0.1602,
2600
+ "step": 156500
2601
+ },
2602
+ {
2603
+ "epoch": 10.084141563363094,
2604
+ "grad_norm": 0.8779458999633789,
2605
+ "learning_rate": 0.00012800222664696942,
2606
+ "loss": 0.1589,
2607
+ "step": 157000
2608
+ },
2609
+ {
2610
+ "epoch": 10.116256663883357,
2611
+ "grad_norm": 0.8363326191902161,
2612
+ "learning_rate": 0.00012586121994561845,
2613
+ "loss": 0.1623,
2614
+ "step": 157500
2615
+ },
2616
+ {
2617
+ "epoch": 10.148371764403622,
2618
+ "grad_norm": 0.9299518465995789,
2619
+ "learning_rate": 0.00012372021324426745,
2620
+ "loss": 0.1632,
2621
+ "step": 158000
2622
+ },
2623
+ {
2624
+ "epoch": 10.180486864923887,
2625
+ "grad_norm": 0.5360029339790344,
2626
+ "learning_rate": 0.00012157920654291647,
2627
+ "loss": 0.1624,
2628
+ "step": 158500
2629
+ },
2630
+ {
2631
+ "epoch": 10.200655148050613,
2632
+ "eval_loss": 0.1880752295255661,
2633
+ "eval_runtime": 6.3588,
2634
+ "eval_samples_per_second": 78.632,
2635
+ "eval_steps_per_second": 5.032,
2636
+ "step": 158814
2637
+ },
2638
+ {
2639
+ "epoch": 10.212601965444152,
2640
+ "grad_norm": 0.9278386235237122,
2641
+ "learning_rate": 0.0001194381998415655,
2642
+ "loss": 0.1618,
2643
+ "step": 159000
2644
+ },
2645
+ {
2646
+ "epoch": 10.244717065964416,
2647
+ "grad_norm": 1.0586360692977905,
2648
+ "learning_rate": 0.00011729719314021454,
2649
+ "loss": 0.1639,
2650
+ "step": 159500
2651
+ },
2652
+ {
2653
+ "epoch": 10.276832166484681,
2654
+ "grad_norm": 0.8897846341133118,
2655
+ "learning_rate": 0.00011515618643886357,
2656
+ "loss": 0.1606,
2657
+ "step": 160000
2658
+ },
2659
+ {
2660
+ "epoch": 10.308947267004946,
2661
+ "grad_norm": 0.9743841290473938,
2662
+ "learning_rate": 0.00011301946175091529,
2663
+ "loss": 0.1638,
2664
+ "step": 160500
2665
+ },
2666
+ {
2667
+ "epoch": 10.341062367525211,
2668
+ "grad_norm": 0.8805962204933167,
2669
+ "learning_rate": 0.0001108784550495643,
2670
+ "loss": 0.1623,
2671
+ "step": 161000
2672
+ },
2673
+ {
2674
+ "epoch": 10.373177468045474,
2675
+ "grad_norm": 0.7496643662452698,
2676
+ "learning_rate": 0.00010873744834821333,
2677
+ "loss": 0.1624,
2678
+ "step": 161500
2679
+ },
2680
+ {
2681
+ "epoch": 10.40066799409082,
2682
+ "eval_loss": 0.18739080429077148,
2683
+ "eval_runtime": 6.1084,
2684
+ "eval_samples_per_second": 81.855,
2685
+ "eval_steps_per_second": 5.239,
2686
+ "step": 161928
2687
+ },
2688
+ {
2689
+ "epoch": 10.40529256856574,
2690
+ "grad_norm": 0.9460242986679077,
2691
+ "learning_rate": 0.00010659644164686236,
2692
+ "loss": 0.1605,
2693
+ "step": 162000
2694
+ },
2695
+ {
2696
+ "epoch": 10.437407669086005,
2697
+ "grad_norm": 0.9972020983695984,
2698
+ "learning_rate": 0.00010445543494551137,
2699
+ "loss": 0.1625,
2700
+ "step": 162500
2701
+ },
2702
+ {
2703
+ "epoch": 10.469522769606268,
2704
+ "grad_norm": 0.8950929045677185,
2705
+ "learning_rate": 0.00010231871025756312,
2706
+ "loss": 0.1571,
2707
+ "step": 163000
2708
+ },
2709
+ {
2710
+ "epoch": 10.501637870126533,
2711
+ "grad_norm": 0.7049261331558228,
2712
+ "learning_rate": 0.00010017770355621215,
2713
+ "loss": 0.161,
2714
+ "step": 163500
2715
+ },
2716
+ {
2717
+ "epoch": 10.533752970646798,
2718
+ "grad_norm": 0.9523254036903381,
2719
+ "learning_rate": 9.803669685486116e-05,
2720
+ "loss": 0.1611,
2721
+ "step": 164000
2722
+ },
2723
+ {
2724
+ "epoch": 10.565868071167063,
2725
+ "grad_norm": 0.7138133645057678,
2726
+ "learning_rate": 9.589569015351019e-05,
2727
+ "loss": 0.158,
2728
+ "step": 164500
2729
+ },
2730
+ {
2731
+ "epoch": 10.597983171687327,
2732
+ "grad_norm": 1.3633294105529785,
2733
+ "learning_rate": 9.37546834521592e-05,
2734
+ "loss": 0.1603,
2735
+ "step": 165000
2736
+ },
2737
+ {
2738
+ "epoch": 10.60068084013103,
2739
+ "eval_loss": 0.18280959129333496,
2740
+ "eval_runtime": 6.1026,
2741
+ "eval_samples_per_second": 81.933,
2742
+ "eval_steps_per_second": 5.244,
2743
+ "step": 165042
2744
+ },
2745
+ {
2746
+ "epoch": 10.630098272207592,
2747
+ "grad_norm": 1.1000264883041382,
2748
+ "learning_rate": 9.161367675080823e-05,
2749
+ "loss": 0.158,
2750
+ "step": 165500
2751
+ },
2752
+ {
2753
+ "epoch": 10.662213372727857,
2754
+ "grad_norm": 0.8271787166595459,
2755
+ "learning_rate": 8.947267004945726e-05,
2756
+ "loss": 0.1581,
2757
+ "step": 166000
2758
+ },
2759
+ {
2760
+ "epoch": 10.694328473248122,
2761
+ "grad_norm": 1.174453854560852,
2762
+ "learning_rate": 8.733594536150898e-05,
2763
+ "loss": 0.1618,
2764
+ "step": 166500
2765
+ },
2766
+ {
2767
+ "epoch": 10.726443573768385,
2768
+ "grad_norm": 1.0474823713302612,
2769
+ "learning_rate": 8.519493866015801e-05,
2770
+ "loss": 0.157,
2771
+ "step": 167000
2772
+ },
2773
+ {
2774
+ "epoch": 10.75855867428865,
2775
+ "grad_norm": 0.7538208961486816,
2776
+ "learning_rate": 8.305393195880703e-05,
2777
+ "loss": 0.1567,
2778
+ "step": 167500
2779
+ },
2780
+ {
2781
+ "epoch": 10.790673774808916,
2782
+ "grad_norm": 1.0399278402328491,
2783
+ "learning_rate": 8.091292525745606e-05,
2784
+ "loss": 0.1578,
2785
+ "step": 168000
2786
+ },
2787
+ {
2788
+ "epoch": 10.800693686171238,
2789
+ "eval_loss": 0.17910771071910858,
2790
+ "eval_runtime": 6.9328,
2791
+ "eval_samples_per_second": 72.121,
2792
+ "eval_steps_per_second": 4.616,
2793
+ "step": 168156
2794
+ },
2795
+ {
2796
+ "epoch": 10.822788875329179,
2797
+ "grad_norm": 0.8242144584655762,
2798
+ "learning_rate": 7.877620056950779e-05,
2799
+ "loss": 0.1641,
2800
+ "step": 168500
2801
+ },
2802
+ {
2803
+ "epoch": 10.854903975849444,
2804
+ "grad_norm": 0.8726205229759216,
2805
+ "learning_rate": 7.663519386815681e-05,
2806
+ "loss": 0.1576,
2807
+ "step": 169000
2808
+ },
2809
+ {
2810
+ "epoch": 10.88701907636971,
2811
+ "grad_norm": 0.8853970170021057,
2812
+ "learning_rate": 7.449418716680584e-05,
2813
+ "loss": 0.1603,
2814
+ "step": 169500
2815
+ },
2816
+ {
2817
+ "epoch": 10.919134176889974,
2818
+ "grad_norm": 0.8636655211448669,
2819
+ "learning_rate": 7.235318046545487e-05,
2820
+ "loss": 0.159,
2821
+ "step": 170000
2822
+ },
2823
+ {
2824
+ "epoch": 10.951249277410238,
2825
+ "grad_norm": 0.8048428297042847,
2826
+ "learning_rate": 7.021217376410388e-05,
2827
+ "loss": 0.1575,
2828
+ "step": 170500
2829
+ },
2830
+ {
2831
+ "epoch": 10.983364377930503,
2832
+ "grad_norm": 0.8899773955345154,
2833
+ "learning_rate": 6.80754490761556e-05,
2834
+ "loss": 0.1547,
2835
+ "step": 171000
2836
+ }
2837
+ ],
2838
+ "logging_steps": 500,
2839
+ "max_steps": 186828,
2840
+ "num_input_tokens_seen": 0,
2841
+ "num_train_epochs": 12,
2842
+ "save_steps": 500,
2843
+ "stateful_callbacks": {
2844
+ "TrainerControl": {
2845
+ "args": {
2846
+ "should_epoch_stop": false,
2847
+ "should_evaluate": false,
2848
+ "should_log": false,
2849
+ "should_save": true,
2850
+ "should_training_stop": false
2851
+ },
2852
+ "attributes": {}
2853
+ }
2854
+ },
2855
+ "total_flos": 1.1515485351896416e+19,
2856
+ "train_batch_size": 2,
2857
+ "trial_name": null,
2858
+ "trial_params": null
2859
+ }
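The block above is the tail of this checkpoint's `trainer_state.json`: each entry in the log history records either a training step (`epoch`, `loss`, `grad_norm`, `learning_rate`, `step`) or an evaluation (`eval_loss`, `eval_runtime`, `eval_samples_per_second`, `eval_steps_per_second`, `step`), logged every 500 steps as configured. A minimal sketch for pulling those curves back out of the file is shown below; the local checkpoint path is an assumption for illustration, and `log_history` is the standard key the Hugging Face `Trainer` uses for this list.

```python
import json

# Hypothetical local path to this checkpoint's state file; adjust as needed.
STATE_PATH = "checkpoint-171259/trainer_state.json"

with open(STATE_PATH) as f:
    state = json.load(f)

# `log_history` is the standard Trainer key holding the entries shown above.
# Training entries carry "loss"; evaluation entries carry "eval_loss".
train_logs = [e for e in state["log_history"] if "loss" in e and "eval_loss" not in e]
eval_logs = [e for e in state["log_history"] if "eval_loss" in e]

train_curve = [(e["step"], e["loss"]) for e in train_logs]
eval_curve = [(e["step"], e["eval_loss"]) for e in eval_logs]

print(f"{len(train_curve)} training points, {len(eval_curve)} eval points")
print("last eval:", eval_curve[-1] if eval_curve else None)
```

Plotting `eval_curve` against `train_curve` gives a quick check that validation loss is still tracking training loss (here both decrease steadily through epoch 11).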
checkpoint-171259/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:892edd3338e5c2cbc20ea9ef24acea77922058f7e35a99445dd322996489f4e7
3
+ size 5496
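Because `training_args.bin` is tracked with Git LFS, only the pointer (oid and 5496-byte size) appears in this diff; the payload itself is the serialized `TrainingArguments` object the Trainer saves alongside each checkpoint. A rough sketch of inspecting it after fetching the LFS object — the path and the printed fields are assumptions for illustration:

```python
import torch

# Assumes the LFS object has been fetched (e.g. `git lfs pull`) so this is the
# real binary rather than the pointer file shown in the diff above.
args = torch.load("checkpoint-171259/training_args.bin", weights_only=False)

# TrainingArguments exposes the run's hyperparameters as attributes.
print(type(args).__name__)
print(args.per_device_train_batch_size, args.num_train_epochs, args.logging_steps)
```

The values printed should agree with the fields at the end of `trainer_state.json` above (batch size 2, 12 epochs, logging every 500 steps).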