Model parameters: d_model 1792, ffw_size 7168, kv_size 128, n_heads 14, n_layers 26

Megatron-DeepSpeed/pretrain_gpt.py \
    --tensor-model-parallel-size 1 \
    --pipeline-model-parallel-size 1 \
    --num-layers 26 \
    --hidden-size 1792 \
    --num-attention-heads 14 \
    --kv-channels 128 \
    --ffn-hidden-size 7168 \
    --seq-length 2048 \
    --max-position-embeddings 2048 \
    --micro-batch-size 4 \
    --global-batch-size 256 \
    --train-samples 740_269 \
    --vocab-file gpt2/vocab.json \
    --merge-file gpt2/merges.txt \
    --loss-scale 12 \
    --clip-grad 1.0 \
    --kill-switch-path kill-switch-1b1oscar \
    --bf16 \
    --optimizer adam \
    --adam-beta1 0.9 \
    --adam-beta2 0.999 \
    --adam-eps 1e-8 \
    --lr 2e-4 \
    --min-lr 2e-5 \
    --lr-decay-style cosine \
    --lr-decay-samples 740_269 \
    --lr-warmup-samples 7403 \
    --clip-grad 1.0 \
    --weight-decay 1e-1 \
    --log-interval 10 \
    --save-interval 1000 \
    --eval-interval 1000 \
    --eval-iters 1 \
    --tensorboard-dir tensorboard_1b1oscar \
    --tensorboard-queue-size 5 \
    --log-timers-to-tensorboard \
    --log-batch-size-to-tensorboard \
    --log-validation-ppl-to-tensorboard \
    --save checkpoints_1b1oscar \
    --load checkpoints_1b1oscar \
    --data-path /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document \
    --data-impl mmap \
    --split 949,50,1 \
    --deepspeed \
    --deepspeed_config ds_configs/2085640.json \
    --zero-stage 0

START 2085640: Tue Nov 29 17:48:36 EET 2022

0: ======================= ROCm System Management Interface =======================
0: ================================= Concise Info =================================
0: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
0: 0    42.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 1    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 2    40.0c  97.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 3    37.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 4    41.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 5    42.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 6    43.0c  88.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 7    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: ================================================================================
0: ============================= End of ROCm SMI Log ==============================
Ranks 1-7 print equivalent pre-launch ROCm SMI reports for their own nodes: all GPUs idle (temperatures 36-52 C, 0% VRAM and 0% GPU utilization).
4: Launching on nid005020 (4/8), master nid005016 port 9999, GPUs 8, CUDA: True
6: Launching on nid005022 (6/8), master nid005016 port 9999, GPUs 8, CUDA: True
7: Launching on nid005023 (7/8), master nid005016 port 9999, GPUs 8, CUDA: True
3: Launching on nid005019 (3/8), master nid005016 port 9999, GPUs 8, CUDA: True
0: Launching on nid005016 (0/8), master nid005016 port 9999, GPUs 8, CUDA: True
2: Launching on nid005018 (2/8), master nid005016 port 9999, GPUs 8, CUDA: True
5: Launching on nid005021 (5/8), master nid005016 port 9999, GPUs 8, CUDA: True
1: Launching on nid005017 (1/8), master nid005016 port 9999, GPUs 8, CUDA: True
0: using world size: 64, data-parallel-size: 64, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
0: using torch.bfloat16 for parameters ...
0: ------------------------ arguments ------------------------
0: abort_on_unmet_fused_kernel_constraints ......... False
0: accumulate_allreduce_grads_in_fp32 .............. True
0: adam_beta1 ...................................... 0.9
0: adam_beta2 ...................................... 0.999
0: adam_eps ........................................ 1e-08
0: adlr_autoresume ................................. False
0: adlr_autoresume_interval ........................ 1000
0: apply_query_key_layer_scaling ................... True
0: apply_residual_connection_post_layernorm ........ False
0: attention_dropout ............................... 0.1
0: attention_softmax_in_fp32 ....................... False
0: bert_binary_head ................................ True
0: bert_load ....................................... None
0: bf16 ............................................ True
0: bias_dropout_fusion ............................. True
0: bias_gelu_fusion ................................ True
0: biencoder_projection_dim ........................ 0
0: biencoder_shared_query_context_model ............ False
0: block_data_path ................................. None
0: checkpoint_activations .......................... False
0: checkpoint_in_cpu ............................... False
0: checkpoint_num_layers ........................... 1
0: clip_grad ....................................... 1.0
0: codecarbon_dir .................................. None
0: consumed_train_samples .......................... 0
0: consumed_train_tokens ........................... 0
0: consumed_valid_samples .......................... 0
0: contigious_checkpointing ........................ False
0: cpu_optimizer ................................... False
0: cpu_torch_adam .................................. False
0: curriculum_learning ............................. False
0: data_impl ....................................... mmap
0: data_parallel_size .............................. 64
0: data_path ....................................... 
['/scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document'] 0: dataloader_type ................................. single 0: DDP_impl ........................................ local 0: decoder_seq_length .............................. None 0: deepscale ....................................... False 0: deepscale_config ................................ None 0: deepspeed ....................................... True 0: deepspeed_activation_checkpointing .............. False 0: deepspeed_config ................................ ds_configs/2085640.json 0: deepspeed_mpi ................................... False 0: distribute_checkpointed_activations ............. False 0: distributed_backend ............................. nccl 0: embed_layernorm ................................. False 0: embedding_path .................................. None 0: encoder_seq_length .............................. 2048 0: eod_mask_loss ................................... False 0: eval_interval ................................... 1000 0: eval_iters ...................................... 1 0: eval_only ....................................... None 0: evidence_data_path .............................. None 0: exit_duration_in_mins ........................... None 0: exit_interval ................................... None 0: ffn_hidden_size ................................. 7168 0: finetune ........................................ False 0: fp16 ............................................ False 0: fp16_lm_cross_entropy ........................... False 0: fp32_residual_connection ........................ False 0: gigaflos_no_embeds .............................. 0 0: global_batch_size ............................... 256 0: glu_activation .................................. None 0: hidden_dropout .................................. 0.1 0: hidden_size ..................................... 1792 0: hysteresis ...................................... 2 0: ict_head_size ................................... None 0: ict_load ........................................ None 0: img_dim ......................................... 224 0: indexer_batch_size .............................. 128 0: indexer_log_interval ............................ 1000 0: inference ....................................... False 0: init_method_std ................................. 0.02 0: init_method_xavier_uniform ...................... False 0: initial_loss_scale .............................. 4294967296 0: kill_switch_path ................................ kill-switch-1b1oscar 0: kv_channels ..................................... 128 0: layer_norm_fusion ............................... True 0: layernorm_epsilon ............................... 1e-05 0: lazy_mpu_init ................................... None 0: load ............................................ checkpoints_1b1oscar 0: local_rank ...................................... None 0: log_batch_size_to_tensorboard ................... True 0: log_interval .................................... 10 0: log_learning_rate_to_tensorboard ................ True 0: log_level ....................................... None 0: log_level_replica ............................... None 0: log_loss_scale_to_tensorboard ................... True 0: log_num_zeros_in_grad ........................... False 0: log_params_norm ................................. False 0: log_path ........................................ None 0: log_timers_to_tensorboard ....................... True 0: log_validation_ppl_to_tensorboard ............... 
True 0: loss_on_targets_only ............................ False 0: loss_scale ...................................... 12.0 0: loss_scale_window ............................... 1000 0: lr .............................................. 0.0002 0: lr_decay_iters .................................. None 0: lr_decay_samples ................................ 740269 0: lr_decay_style .................................. cosine 0: lr_decay_tokens ................................. None 0: lr_warmup_fraction .............................. None 0: lr_warmup_iters ................................. 0 0: lr_warmup_samples ............................... 7403 0: make_vocab_size_divisible_by .................... 128 0: mask_prob ....................................... 0.15 0: masked_softmax_fusion ........................... True 0: max_position_embeddings ......................... 2048 0: mean_noise_span_length .......................... None 0: memory_centric_tiled_linear ..................... False 0: merge_file ...................................... gpt2/merges.txt 0: micro_batch_size ................................ 4 0: min_loss_scale .................................. 1.0 0: min_lr .......................................... 2e-05 0: mmap_warmup ..................................... False 0: no_load_optim ................................... None 0: no_load_rng ..................................... None 0: no_save_optim ................................... None 0: no_save_rng ..................................... None 0: noise_density ................................... None 0: num_attention_heads ............................. 14 0: num_channels .................................... 3 0: num_classes ..................................... 1000 0: num_layers ...................................... 26 0: num_layers_per_virtual_pipeline_stage ........... None 0: num_workers ..................................... 2 0: onnx_safe ....................................... None 0: openai_gelu ..................................... False 0: optimizer ....................................... adam 0: optimizer_fusion ................................ True 0: override_lr_scheduler ........................... False 0: pad_vocab_size_to ............................... None 0: params_dtype .................................... torch.bfloat16 0: partition_activations ........................... False 0: patch_dim ....................................... 16 0: pipeline_model_parallel_size .................... 1 0: position_embedding_type ......................... PositionEmbeddingType.absolute 0: pp_partition_method ............................. None 0: profile_backward ................................ False 0: query_in_block_prob ............................. 0.1 0: rampup_batch_size ............................... None 0: rank ............................................ 0 0: remote_device ................................... none 0: reset_attention_mask ............................ False 0: reset_position_ids .............................. False 0: retriever_report_topk_accuracies ................ [] 0: retriever_score_scaling ......................... False 0: retriever_seq_length ............................ 256 0: reweight_loss_based_on_position_frequency ....... False 0: sample_rate ..................................... 1.0 0: save ............................................ checkpoints_1b1oscar 0: save_interval ................................... 1000 0: scatter_gather_tensors_in_pipeline .............. 
True 0: scattered_embeddings ............................ False 0: seed ............................................ 1234 0: seq_length ...................................... 2048 0: sgd_momentum .................................... 0.9 0: short_seq_prob .................................. 0.1 0: skip_train_iteration_range ...................... None 0: split ........................................... 949,50,1 0: split_transformers .............................. False 0: sync_tp_duplicated_parameters ................... False 0: synchronize_each_layer .......................... False 0: tensor_model_parallel_size ...................... 1 0: tensorboard_dir ................................. tensorboard_1b1oscar 0: tensorboard_log_interval ........................ 1 0: tensorboard_queue_size .......................... 5 0: test_weighted_split_names ....................... None 0: test_weighted_split_paths ....................... None 0: test_weighted_split_paths_path .................. None 0: test_weighted_split_splits ...................... None 0: test_weighted_split_weights ..................... None 0: tile_factor ..................................... 1 0: titles_data_path ................................ None 0: tokenizer_name_or_path .......................... None 0: tokenizer_type .................................. GPT2BPETokenizer 0: train_iters ..................................... None 0: train_samples ................................... 740269 0: train_tokens .................................... None 0: train_weighted_split_paths ...................... None 0: train_weighted_split_paths_path ................. None 0: universal_checkpoint ............................ False 0: use_bnb_optimizer ............................... False 0: use_checkpoint_lr_scheduler ..................... False 0: use_contiguous_buffers_in_ddp ................... True 0: use_cpu_initialization .......................... None 0: use_one_sent_docs ............................... False 0: use_pin_memory .................................. False 0: valid_num_workers ............................... 2 0: valid_weighted_split_names ...................... None 0: valid_weighted_split_paths ...................... None 0: valid_weighted_split_paths_path ................. None 0: valid_weighted_split_splits ..................... None 0: valid_weighted_split_weights .................... None 0: virtual_pipeline_model_parallel_size ............ None 0: vocab_extra_ids ................................. 0 0: vocab_file ...................................... gpt2/vocab.json 0: weight_decay .................................... 0.1 0: world_size ...................................... 64 0: zero_allgather_bucket_size ...................... 0.0 0: zero_contigious_gradients ....................... False 0: zero_reduce_bucket_size ......................... 0.0 0: zero_reduce_scatter ............................. False 0: zero_stage ...................................... 0 0: -------------------- end of arguments --------------------- 0: setting number of micro-batches to constant 1 0: > building GPT2BPETokenizer tokenizer ... 0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304) 0: DeepSpeed general environment info: 0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch'] 0: torch version .................... 1.13.0+rocm5.2 0: torch cuda version ............... None 0: torch hip version ................ 
5.2.21151-afdc89f8 0: nvcc version ..................... None 0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed'] 0: deepspeed info ................... 0.7.5, unknown, unknown 0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1 0: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** 0: > initializing torch distributed ... 0: [2022-11-29 17:50:11,092] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl 7: > setting tensorboard ... 0: > initializing tensor model parallel with size 1 0: > initializing pipeline model parallel with size 1 0: > setting random seeds to 1234 ... 0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 0: > compiling dataset index builder ... 0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: make: Nothing to be done for 'default'. 0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: >>> done with dataset index builder. Compilation time: 0.091 seconds 0: > compiling and loading fused kernels ... 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.cpp [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: 
/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 87 0: ninja: no work to do. 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 63 0: ninja: no work to do. 
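The hipify output above covers Megatron's fused scaled-masked-softmax kernels. For orientation only, here is a minimal PyTorch sketch of what the scaled upper-triangular (causal) masked softmax computes; this is the operator's generally known reference semantics, not code from this repository, and the function name is merely illustrative.

```python
import torch

def scaled_upper_triang_masked_softmax(scores: torch.Tensor, scale: float) -> torch.Tensor:
    # scores: (batch * heads, seq_q, seq_k) raw attention logits.
    # Scale, mask out future key positions (strict upper triangle), softmax over keys.
    seq_q, seq_k = scores.shape[-2:]
    causal_mask = torch.triu(
        torch.ones(seq_q, seq_k, dtype=torch.bool, device=scores.device), diagonal=1
    )
    scores = scores * scale
    scores = scores.masked_fill(causal_mask, float("-inf"))
    return torch.softmax(scores, dim=-1)
```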
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 67 0: ninja: no work to do. 0: >>> done with compiling and loading fused kernels. Compilation time: 21.171 seconds 0: time to initialize megatron (seconds): 77.428 0: [after megatron is initialized] datetime: 2022-11-29 17:50:37 0: building GPT model ... 
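Before the model is built, a quick consistency check is possible: the vocabulary is padded from 50,257 GPT-2 tokens to 50,304 (47 dummy tokens, a multiple of 128), and a GPT-2-style stack with hidden size 1792, FFN size 7168 and 26 layers should land on the ~1.1B parameters that DeepSpeed reports further down (TOTAL_PARAMS=1096338432). A minimal sketch of that arithmetic, assuming tied input/output embeddings, learned absolute position embeddings and biases on all linear layers, consistent with the arguments above:

```python
def gpt_param_count(hidden: int, ffn: int, n_layers: int,
                    vocab_padded: int, seq_len: int) -> int:
    emb = vocab_padded * hidden + seq_len * hidden   # tied token + position embeddings
    attn = 4 * hidden * hidden + 4 * hidden          # QKV and output projections + biases
    mlp = 2 * hidden * ffn + ffn + hidden            # up/down projections + biases
    norms = 2 * 2 * hidden                           # two LayerNorms per block (weight + bias)
    final_norm = 2 * hidden
    return emb + n_layers * (attn + mlp + norms) + final_norm

print(gpt_param_count(hidden=1792, ffn=7168, n_layers=26,
                      vocab_padded=50304, seq_len=2048))  # 1096338432, i.e. ~1.1B
```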
0: [2022-11-29 17:50:37,483] [INFO] [utils.py:827:see_memory_usage] Before Building Model 0: [2022-11-29 17:50:37,484] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB 0: [2022-11-29 17:50:37,484] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 29.14 GB, percent = 5.8% 0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None 0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7, ProcessCoord(pipe=0, data=8, model=0): 8, ProcessCoord(pipe=0, data=9, model=0): 9, ProcessCoord(pipe=0, data=10, model=0): 10, ProcessCoord(pipe=0, data=11, model=0): 11, ProcessCoord(pipe=0, data=12, model=0): 12, ProcessCoord(pipe=0, data=13, model=0): 13, ProcessCoord(pipe=0, data=14, model=0): 14, ProcessCoord(pipe=0, data=15, model=0): 15, ProcessCoord(pipe=0, data=16, model=0): 16, ProcessCoord(pipe=0, data=17, model=0): 17, ProcessCoord(pipe=0, data=18, model=0): 18, ProcessCoord(pipe=0, data=19, model=0): 19, ProcessCoord(pipe=0, data=20, model=0): 20, ProcessCoord(pipe=0, data=21, model=0): 21, ProcessCoord(pipe=0, data=22, model=0): 22, ProcessCoord(pi 0: pe=0, data=23, model=0): 23, ProcessCoord(pipe=0, data=24, model=0): 24, ProcessCoord(pipe=0, data=25, model=0): 25, ProcessCoord(pipe=0, data=26, model=0): 26, ProcessCoord(pipe=0, data=27, model=0): 27, ProcessCoord(pipe=0, data=28, model=0): 28, ProcessCoord(pipe=0, data=29, model=0): 29, ProcessCoord(pipe=0, data=30, model=0): 30, ProcessCoord(pipe=0, data=31, model=0): 31, ProcessCoord(pipe=0, data=32, model=0): 32, ProcessCoord(pipe=0, data=33, model=0): 33, ProcessCoord(pipe=0, data=34, model=0): 34, ProcessCoord(pipe=0, data=35, model=0): 35, ProcessCoord(pipe=0, data=36, model=0): 36, ProcessCoord(pipe=0, data=37, model=0): 37, ProcessCoord(pipe=0, data=38, model=0): 38, ProcessCoord(pipe=0, data=39, model=0): 39, ProcessCoord(pipe=0, data=40, model=0): 40, ProcessCoord(pipe=0, data=41, model=0): 41, ProcessCoord(pipe=0, data=42, model=0): 42, ProcessCoord(pipe=0, data=43, model=0): 43, ProcessCoord(pipe=0, data=44, model=0): 44, ProcessCoord(pipe=0, data=45, model=0): 45, ProcessCoord(pipe=0, data=4 0: 6, model=0): 46, ProcessCoord(pipe=0, data=47, model=0): 47, ProcessCoord(pipe=0, data=48, model=0): 48, ProcessCoord(pipe=0, data=49, model=0): 49, ProcessCoord(pipe=0, data=50, model=0): 50, ProcessCoord(pipe=0, data=51, model=0): 51, ProcessCoord(pipe=0, data=52, model=0): 52, ProcessCoord(pipe=0, data=53, model=0): 53, ProcessCoord(pipe=0, data=54, model=0): 54, ProcessCoord(pipe=0, data=55, model=0): 55, ProcessCoord(pipe=0, data=56, model=0): 56, ProcessCoord(pipe=0, data=57, model=0): 57, ProcessCoord(pipe=0, data=58, model=0): 58, ProcessCoord(pipe=0, data=59, model=0): 59, ProcessCoord(pipe=0, data=60, model=0): 60, ProcessCoord(pipe=0, data=61, model=0): 61, ProcessCoord(pipe=0, data=62, model=0): 62, ProcessCoord(pipe=0, data=63, model=0): 63} 0: [2022-11-29 17:50:39,575] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer 0: stage=0 layers=33 0: 0: _to_float16 0: 1: EmbeddingPipe 0: 2: 0: 3: ParallelTransformerLayerPipe 0: 4: ParallelTransformerLayerPipe 0: 5: ParallelTransformerLayerPipe 0: 6: ParallelTransformerLayerPipe 0: 7: 
ParallelTransformerLayerPipe 0: 8: ParallelTransformerLayerPipe 0: 9: ParallelTransformerLayerPipe 0: 10: ParallelTransformerLayerPipe 0: 11: ParallelTransformerLayerPipe 0: 12: ParallelTransformerLayerPipe 0: 13: ParallelTransformerLayerPipe 0: 14: ParallelTransformerLayerPipe 0: 15: ParallelTransformerLayerPipe 0: 16: ParallelTransformerLayerPipe 0: 17: ParallelTransformerLayerPipe 0: 18: ParallelTransformerLayerPipe 0: 19: ParallelTransformerLayerPipe 0: 20: ParallelTransformerLayerPipe 0: 21: ParallelTransformerLayerPipe 0: 22: ParallelTransformerLayerPipe 0: 23: ParallelTransformerLayerPipe 0: 24: ParallelTransformerLayerPipe 0: 25: ParallelTransformerLayerPipe 0: 26: ParallelTransformerLayerPipe 0: 27: ParallelTransformerLayerPipe 0: 28: ParallelTransformerLayerPipe 0: 29: undo 0: 30: MixedFusedLayerNorm 0: 31: EmbeddingPipe 0: 32: float16_to_fp32 0: loss: CrossEntropy 0: [2022-11-29 17:50:40,132] [INFO] [utils.py:827:see_memory_usage] After Building Model 0: [2022-11-29 17:50:40,132] [INFO] [utils.py:828:see_memory_usage] MA 2.05 GB Max_MA 2.05 GB CA 2.19 GB Max_CA 2 GB 0: [2022-11-29 17:50:40,132] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 29.19 GB, percent = 5.8% 0: setting training iterations to 2891 0: > learning rate decay style: cosine 0: DeepSpeed is enabled. 0: [2022-11-29 17:50:40,134] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown 0: [2022-11-29 17:50:53,232] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False 0: [2022-11-29 17:50:53,232] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer 0: [2022-11-29 17:50:53,232] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer 0: [2022-11-29 17:50:53,244] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam 0: [2022-11-29 17:50:53,244] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer 0: [2022-11-29 17:50:53,287] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer 0: [2022-11-29 17:50:53,288] [INFO] [utils.py:828:see_memory_usage] MA 2.04 GB Max_MA 2.06 GB CA 2.19 GB Max_CA 2 GB 0: [2022-11-29 17:50:53,288] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 29.88 GB, percent = 5.9% 2: ninja: no work to do. 2: Time to load utils op: 0.23566126823425293 seconds 3: ninja: no work to do. 
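A few lines above, the trainer reports "setting training iterations to 2891". That number, and the global batch size, follow directly from the arguments (the number of micro-batches per step is constant 1, as logged earlier); a small sanity check in plain Python, with variable names that are only illustrative:

```python
micro_batch_size   = 4      # --micro-batch-size
data_parallel_size = 64     # world size 64 with TP=1 and PP=1
grad_accum_steps   = 1      # micro-batches per step set to constant 1

global_batch_size = micro_batch_size * data_parallel_size * grad_accum_steps
assert global_batch_size == 256              # matches --global-batch-size

train_samples = 740_269                      # --train-samples
seq_length    = 2048                         # --seq-length
print(train_samples // global_batch_size)    # 2891 training iterations
print(train_samples * seq_length / 1e9)      # ~1.52B training tokens
```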
3: Time to load utils op: 0.17323756217956543 seconds 0: Time to load utils op: 0.33142590522766113 seconds 0: Time to load utils op: 0.3027992248535156 seconds 0: Time to load utils op: 0.30228495597839355 seconds 0: Time to load utils op: 0.3031189441680908 seconds 0: Time to load utils op: 0.30384206771850586 seconds 0: Time to load utils op: 0.3040444850921631 seconds 0: Time to load utils op: 0.3023831844329834 seconds 0: Time to load utils op: 0.30482029914855957 seconds 2: Time to load utils op: 0.304034948348999 seconds 2: Time to load utils op: 0.3036925792694092 secondsTime to load utils op: 0.3044910430908203 seconds 2: 2: Time to load utils op: 0.3046729564666748 seconds 2: Time to load utils op: 0.20252513885498047 seconds 2: Time to load utils op: 0.20206379890441895 seconds 2: Time to load utils op: 0.20276308059692383 seconds 3: Time to load utils op: 0.20595955848693848 seconds 3: Time to load utils op: 0.20621442794799805 seconds 3: Time to load utils op: 0.20627641677856445 seconds 3: Time to load utils op: 0.20633983612060547 seconds 3: Time to load utils op: 0.2063581943511963 seconds 3: Time to load utils op: 0.20648479461669922 seconds 3: Time to load utils op: 0.20666980743408203 seconds 4: Time to load utils op: 0.22125029563903809 secondsTime to load utils op: 0.22104287147521973 seconds 4: 4: Time to load utils op: 0.2197115421295166 seconds 4: Time to load utils op: 0.22103190422058105 secondsTime to load utils op: 0.22060585021972656 secondsTime to load utils op: 0.2202603816986084 seconds 4: Time to load utils op: 0.2210216522216797 seconds 4: 4: 4: Time to load utils op: 0.2212374210357666 seconds 1: Time to load utils op: 0.324230432510376 seconds 1: Time to load utils op: 0.3242325782775879 secondsTime to load utils op: 0.3242371082305908 seconds 1: 1: Time to load utils op: 0.32424139976501465 seconds 1: Time to load utils op: 0.3241105079650879 seconds 1: Time to load utils op: 0.32425761222839355 seconds 1: Time to load utils op: 0.32433271408081055 seconds 1: Time to load utils op: 0.32428956031799316 seconds 0: [2022-11-29 17:50:53,626] [INFO] [utils.py:827:see_memory_usage] before initializing group 0 0: [2022-11-29 17:50:53,626] [INFO] [utils.py:828:see_memory_usage] MA 2.04 GB Max_MA 2.04 GB CA 2.19 GB Max_CA 2 GB 0: [2022-11-29 17:50:53,626] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 29.88 GB, percent = 5.9% 5: Time to load utils op: 0.21216583251953125 seconds 5: Time to load utils op: 0.21256661415100098 seconds 5: Time to load utils op: 0.2125706672668457 secondsTime to load utils op: 0.2125694751739502 seconds 5: 5: Time to load utils op: 0.21259331703186035 secondsTime to load utils op: 0.21260380744934082 seconds 5: Time to load utils op: 0.21259689331054688 seconds 5: 5: Time to load utils op: 0.21260643005371094 seconds 6: Time to load utils op: 0.21385622024536133 seconds 6: Time to load utils op: 0.21386194229125977 seconds 6: Time to load utils op: 0.213881254196167 seconds 6: Time to load utils op: 0.21387791633605957 seconds 6: Time to load utils op: 0.21389269828796387 secondsTime to load utils op: 0.21390056610107422 secondsTime to load utils op: 0.21390032768249512 seconds 6: 6: 6: Time to load utils op: 0.21390700340270996 seconds 7: Time to load utils op: 0.21324944496154785 seconds 7: Time to load utils op: 0.2132704257965088 secondsTime to load utils op: 0.21326994895935059 seconds 7: 7: Time to load utils op: 0.21330785751342773 secondsTime to load utils op: 0.21331262588500977 seconds 7: 7: Time to load 
utils op: 0.21332073211669922 secondsTime to load utils op: 0.2133185863494873 seconds 7: 7: Time to load utils op: 0.21333527565002441 seconds 0: [2022-11-29 17:50:53,967] [INFO] [utils.py:827:see_memory_usage] after initializing group 0 0: [2022-11-29 17:50:53,967] [INFO] [utils.py:828:see_memory_usage] MA 4.24 GB Max_MA 4.24 GB CA 5.44 GB Max_CA 5 GB 0: [2022-11-29 17:50:53,967] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 29.89 GB, percent = 5.9% 2: Time to load utils op: 0.0005064010620117188 seconds 2: Time to load utils op: 0.0004057884216308594 seconds 2: Time to load utils op: 0.0003991127014160156 seconds 0: Time to load utils op: 0.0005359649658203125 seconds 0: Time to load utils op: 0.00045180320739746094 seconds 0: Time to load utils op: 0.00042748451232910156 seconds 2: Time to load utils op: 0.0005943775177001953 seconds 0: Time to load utils op: 0.0004229545593261719 seconds 2: Time to load utils op: 0.0006234645843505859 seconds 0: Time to load utils op: 0.00042247772216796875 seconds 2: Time to load utils op: 0.0006449222564697266 seconds 0: Time to load utils op: 0.0006496906280517578 seconds 0: Time to load utils op: 0.0006239414215087891 seconds 2: Time to load utils op: 0.0006906986236572266 seconds 2: Time to load utils op: 0.000476837158203125 seconds 4: Time to load utils op: 0.0007371902465820312 seconds 6: Time to load utils op: 0.0013804435729980469 seconds 4: Time to load utils op: 0.0009975433349609375 secondsTime to load utils op: 0.001041412353515625 seconds 4: 4: Time to load utils op: 0.0011353492736816406 seconds 4: Time to load utils op: 0.0011258125305175781 seconds 4: Time to load utils op: 0.0011293888092041016 seconds 4: Time to load utils op: 0.0011546611785888672 seconds 4: Time to load utils op: 0.0011615753173828125 seconds 6: Time to load utils op: 0.001924753189086914 seconds 5: Time to load utils op: 0.0009436607360839844 seconds 6: Time to load utils op: 0.0020017623901367188 seconds 6: Time to load utils op: 0.0019345283508300781 seconds 6: Time to load utils op: 0.0019729137420654297 secondsTime to load utils op: 0.001962900161743164 seconds 6: 6: Time to load utils op: 0.001992464065551758 seconds 6: Time to load utils op: 0.0020034313201904297 seconds 5: Time to load utils op: 0.001226186752319336 seconds 5: Time to load utils op: 0.0015766620635986328 seconds 1: Time to load utils op: 0.0007855892181396484 seconds 5: Time to load utils op: 0.0016503334045410156 seconds 5: Time to load utils op: 0.001684427261352539 seconds 5: Time to load utils op: 0.00164794921875 secondsTime to load utils op: 0.0016486644744873047 seconds 5: 5: Time to load utils op: 0.0016918182373046875 seconds 1: Time to load utils op: 0.0013966560363769531 seconds 1: Time to load utils op: 0.0019440650939941406 seconds 1: Time to load utils op: 0.0019609928131103516 seconds 1: Time to load utils op: 0.0019588470458984375 seconds 1: Time to load utils op: 0.0019412040710449219 seconds 1: Time to load utils op: 0.0019609928131103516 seconds 1: Time to load utils op: 0.0020444393157958984 seconds 0: [2022-11-29 17:50:53,999] [INFO] [utils.py:827:see_memory_usage] before initializing group 1 0: [2022-11-29 17:50:54,000] [INFO] [utils.py:828:see_memory_usage] MA 4.24 GB Max_MA 4.24 GB CA 5.44 GB Max_CA 5 GB 0: [2022-11-29 17:50:54,000] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 29.9 GB, percent = 5.9% 3: Time to load utils op: 0.0009582042694091797 seconds 3: Time to load utils op: 0.0009634494781494141 seconds 3: Time to 
load utils op: 0.0013456344604492188 seconds 3: Time to load utils op: 0.0013494491577148438 seconds 3: Time to load utils op: 0.0012884140014648438 secondsTime to load utils op: 0.0014028549194335938 seconds 3: 3: Time to load utils op: 0.0010943412780761719 seconds 3: Time to load utils op: 0.0013670921325683594 seconds 7: Time to load utils op: 0.0012385845184326172 seconds 7: Time to load utils op: 0.002031087875366211 seconds 7: Time to load utils op: 0.0020279884338378906 seconds 7: Time to load utils op: 0.0020189285278320312 secondsTime to load utils op: 0.0020744800567626953 seconds 7: 7: Time to load utils op: 0.00201416015625 seconds 7: Time to load utils op: 0.002043008804321289 seconds 7: Time to load utils op: 0.002056121826171875 seconds 0: [2022-11-29 17:50:54,048] [INFO] [utils.py:827:see_memory_usage] after initializing group 1 0: [2022-11-29 17:50:54,049] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB Max_MA 6.19 GB CA 8.31 GB Max_CA 8 GB 0: [2022-11-29 17:50:54,049] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.03 GB, percent = 6.0% 0: [2022-11-29 17:50:54,081] [INFO] [utils.py:827:see_memory_usage] before initializing group 2 0: [2022-11-29 17:50:54,081] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB Max_MA 6.19 GB CA 8.31 GB Max_CA 8 GB 0: [2022-11-29 17:50:54,082] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.03 GB, percent = 6.0% 0: [2022-11-29 17:50:54,116] [INFO] [utils.py:827:see_memory_usage] after initializing group 2 0: [2022-11-29 17:50:54,116] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB Max_MA 6.19 GB CA 8.31 GB Max_CA 8 GB 0: [2022-11-29 17:50:54,116] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.03 GB, percent = 6.0% 0: [2022-11-29 17:50:54,147] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer 0: [2022-11-29 17:50:54,147] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB Max_MA 6.19 GB CA 8.31 GB Max_CA 8 GB 0: [2022-11-29 17:50:54,148] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.03 GB, percent = 6.0% 0: [2022-11-29 17:50:54,184] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer 0: [2022-11-29 17:50:54,185] [INFO] [utils.py:828:see_memory_usage] MA 6.32 GB Max_MA 6.32 GB CA 8.34 GB Max_CA 8 GB 0: [2022-11-29 17:50:54,185] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.03 GB, percent = 6.0% 0: [2022-11-29 17:50:54,217] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer 0: [2022-11-29 17:50:54,217] [INFO] [utils.py:828:see_memory_usage] MA 6.32 GB Max_MA 6.32 GB CA 8.34 GB Max_CA 8 GB 0: [2022-11-29 17:50:54,217] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.03 GB, percent = 6.0% 0: [2022-11-29 17:50:54,217] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam 0: [2022-11-29 17:50:54,217] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler 0: [2022-11-29 17:50:54,218] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = 0: [2022-11-29 17:50:54,218] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 0: [2022-11-29 17:50:54,218] [INFO] [config.py:1007:print] DeepSpeedEngine configuration: 0: [2022-11-29 17:50:54,218] [INFO] [config.py:1011:print] activation_checkpointing_config { 0: "partition_activations": false, 0: "contiguous_memory_optimization": false, 0: "cpu_checkpointing": false, 0: "number_checkpoints": null, 0: 
"synchronize_checkpoint_boundary": false, 0: "profile": false 0: } 0: [2022-11-29 17:50:54,218] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} 0: [2022-11-29 17:50:54,218] [INFO] [config.py:1011:print] amp_enabled .................. False 0: [2022-11-29 17:50:54,218] [INFO] [config.py:1011:print] amp_params ................... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] autotuning_config ............ { 0: "enabled": false, 0: "start_step": null, 0: "end_step": null, 0: "metric_path": null, 0: "arg_mappings": null, 0: "metric": "throughput", 0: "model_info": null, 0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results", 0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps", 0: "overwrite": true, 0: "fast": true, 0: "start_profile_step": 3, 0: "end_profile_step": 5, 0: "tuner_type": "gridsearch", 0: "tuner_early_stopping": 5, 0: "tuner_num_trials": 50, 0: "model_info_path": null, 0: "mp_size": 1, 0: "max_train_batch_size": null, 0: "min_train_batch_size": 1, 0: "max_train_micro_batch_size_per_gpu": 1.024000e+03, 0: "min_train_micro_batch_size_per_gpu": 1, 0: "num_tuning_micro_batch_sizes": 3 0: } 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] bfloat16_enabled ............. True 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] comms_config ................. 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] communication_data_type ...... None 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_pa 0: rameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] curriculum_enabled ........... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] curriculum_params ............ False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] dataloader_drop_last ......... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] disable_allgather ............ 
False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] dump_state ................... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] elasticity_enabled ........... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] flops_profiler_config ........ { 0: "enabled": false, 0: "profile_step": 1, 0: "module_depth": -1, 0: "top_modules": 1, 0: "detailed": true, 0: "output_file": null 0: } 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] fp16_auto_cast ............... None 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] fp16_enabled ................. False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] global_rank .................. 0 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] load_universal_checkpoint .... False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] loss_scale ................... 1.0 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] memory_breakdown ............. False 0: [2022-11-29 17:50:54,219] [INFO] [config.py:1011:print] monitor_config ............... 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] nebula_config ................ { 0: "enabled": false, 0: "persistent_storage_path": null, 0: "persistent_time_interval": 100, 0: "num_of_version_in_retention": 2, 0: "enable_nebula_load": true, 0: "load_path": null 0: } 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] optimizer_name ............... None 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] optimizer_params ............. None 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0} 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] pld_enabled .................. False 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] pld_params ................... False 0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print] prescale_gradients ........... 
False
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   scheduler_name ............... None
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   scheduler_params ............. None
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   sparse_attention ............. None
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   sparse_gradients_enabled ..... False
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   steps_per_print .............. 2000
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   train_batch_size ............. 256
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   train_micro_batch_size_per_gpu  4
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   use_node_local_storage ....... False
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   wall_clock_breakdown ......... False
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   world_size ................... 64
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   zero_allow_untested_optimizer  False
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   zero_enabled ................. False
0: [2022-11-29 17:50:54,220] [INFO] [config.py:1011:print]   zero_optimization_stage ...... 0
0: [2022-11-29 17:50:54,220] [INFO] [config.py:996:print_user_config]   json = {
0:     "train_micro_batch_size_per_gpu": 4,
0:     "train_batch_size": 256,
0:     "gradient_clipping": 1.0,
0:     "zero_optimization": {
0:         "stage": 0
0:     },
0:     "bf16": {
0:         "enabled": true
0:     },
0:     "steps_per_print": 2.000000e+03,
0:     "wall_clock_breakdown": false
0: }
0: Time to load utils op: 0.00041604042053222656 seconds
0: [2022-11-29 17:50:54,221] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=4
0: [2022-11-29 17:50:54,245] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=33 [0, 33) STAGE_PARAMS=1096338432 (1096.338M) TOTAL_PARAMS=1096338432 (1096.338M) UNIQUE_PARAMS=1096338432 (1096.338M)
0: [2022-11-29 17:50:54,251] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1oscar/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
0: WARNING: could not find the metadata file checkpoints_1b1oscar
0: will not load any checkpoints and will start from random
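The load_checkpoint warning above is expected on a fresh run: DeepSpeed looks for a small text file named latest inside the --load directory, containing the tag of the most recent checkpoint, and falls back to random initialization when it is absent. A hedged illustration of that lookup (not DeepSpeed's actual code; the path is taken from the arguments above):

```python
import os

save_dir = "checkpoints_1b1oscar"          # --load / --save directory
latest_path = os.path.join(save_dir, "latest")

if os.path.isfile(latest_path):
    # The file holds the checkpoint tag to resume from, e.g. "global_step1000".
    with open(latest_path) as f:
        tag = f.read().strip()
    print(f"would resume from {os.path.join(save_dir, tag)}")
else:
    print("no 'latest' file -> start from random initialization")
```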
7: time (ms) | load-checkpoint: 7.04 0: estimated model parameters: 1.096338432 0: estimated model parameters without embeddings: 1.002523648 0: [after model, optimizer, and learning rate scheduler are built] datetime: 2022-11-29 17:50:55 0: > building train, validation, and test datasets ... 0: > datasets target sizes (minimum size): 0: train: 740269 0: validation: 768 0: test: 256 0: > building train, validation, and test datasets for GPT ... 0: > building dataset index ... 0: reading sizes... 0: reading pointers... 0: reading document index... 0: creating numpy buffer of mmap... 0: creating memory view of numpy buffer...
0: > finished creating indexed dataset in 0.025524 seconds 0: number of documents: 431992659 0: > dataset split: 0: train: 0: document indices in [0, 409961033) total of 409961033 documents 0: validation: 0: document indices in [409961033, 431560666) total of 21599633 documents 0: test: 0: document indices in [431560666, 431992659) total of 431993 documents 0: > WARNING: could not find index map files, building the indices on rank 0 ... 0: > only one epoch required, setting separate_last_epoch to False 0: > elasped time to build and save doc-idx mapping (seconds): 30.706916 0: using: 0: number of documents: 409961033 0: number of epochs: 1 0: sequence length: 2048 0: total number of samples: 262636457 0: > elasped time to build and save sample-idx mapping (seconds): 7.716910 0: > building shuffle index with split [0, 262636457) and [262636457, 262636457) ... 0: > elasped time to build and save shuffle-idx mapping (seconds): 15.653235 0: > loading doc-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_train_indexmap_740269ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_train_indexmap_740269ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_train_indexmap_740269ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.140 seconds 0: total number of samples: 262636458 0: total number of epochs: 1 0: > WARNING: could not find index map files, building the indices on rank 0 ... 0: > only one epoch required, setting separate_last_epoch to False 0: > elasped time to build and save doc-idx mapping (seconds): 1.270410 0: using: 0: number of documents: 21599633 0: number of epochs: 1 0: sequence length: 2048 0: total number of samples: 13852049 0: > elasped time to build and save sample-idx mapping (seconds): 0.415536 0: > building shuffle index with split [0, 13852049) and [13852049, 13852049) ... 0: > elasped time to build and save shuffle-idx mapping (seconds): 0.490091 0: > loading doc-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_valid_indexmap_768ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_valid_indexmap_768ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_valid_indexmap_768ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.034 seconds 0: total number of samples: 13852050 0: total number of epochs: 1 0: > loading doc-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_test_indexmap_256ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_test_indexmap_256ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/oscar_megatron/gpt2tok_oscar_text_document_test_indexmap_256ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.052 seconds 0: total number of samples: 276852 0: total number of epochs: 1 0: > finished creating GPT datasets ... 0: [after dataloaders are built] datetime: 2022-11-29 17:52:22 0: done with setup ... 0: training ... 
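Sanity check for the dataset accounting above: the sketch below assumes the usual Megatron split/target arithmetic (which the log does not print) and, under that assumption, reproduces the 949/50/1 document boundaries, the 740269/768/256 sample targets, and the 2891 total iterations that appear in the training log that follows.

# Sketch: reproduce the dataset split and target sample counts printed above.
total_docs = 431_992_659
train_end  = int(round(total_docs * 949 / 1000))   # --split 949,50,1
valid_end  = int(round(total_docs * 999 / 1000))
print(train_end, valid_end - train_end, total_docs - valid_end)
# -> 409961033 21599633 431993, matching the document-index ranges above

global_batch  = 256
train_samples = 740_269                          # --train-samples
train_iters   = train_samples // global_batch    # 2891, as in "iteration .../ 2891"
eval_interval, eval_iters = 1000, 1
valid_target  = (train_iters // eval_interval + 1) * eval_iters * global_batch   # 768
test_target   = eval_iters * global_batch                                        # 256
print(train_iters, valid_target, test_target)    # 2891 768 256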
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings: 7: time (ms) | model-and-optimizer-setup: 17490.59 | train/valid/test-data-iterators-setup: 87650.14 0: [000-000] 1.0963B / 1.0025B 0: [before the start of training step] datetime: 2022-11-29 17:52:23 0: [Rank 0] (after 10 iterations) memory (MB) | allocated: 10138.55712890625 | max allocated: 54070.994140625 | reserved: 55702.0 | max reserved: 55702.0 7: iteration 10/ 2891 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (s): 3.63 | learning rate: 6.916E-05 | global batch size: 256 | lm loss: 1.004691E+01 | grad norm: 3.924 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 70.600 | TFLOPs: 17.08 | 7: iteration 20/ 2891 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (s): 1.29 | learning rate: 1.383E-04 | global batch size: 256 | lm loss: 8.535011E+00 | grad norm: 1.547 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.490 | TFLOPs: 48.03 | 7: iteration 30/ 2891 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (s): 1.30 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 7.889861E+00 | grad norm: 1.003 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.089 | TFLOPs: 47.69 | 7: iteration 40/ 2891 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (s): 1.26 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 7.719034E+00 | grad norm: 0.750 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.856 | TFLOPs: 49.09 | 7: iteration 50/ 2891 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (s): 1.28 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 7.486797E+00 | grad norm: 1.875 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.565 | TFLOPs: 48.29 | 7: iteration 60/ 2891 | consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (s): 1.27 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 7.247771E+00 | grad norm: 1.151 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.289 | TFLOPs: 48.71 | 7: iteration 70/ 2891 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (s): 1.27 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 7.072065E+00 | grad norm: 0.787 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.287 | TFLOPs: 48.95 | 7: iteration 80/ 2891 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (s): 1.29 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.970959E+00 | grad norm: 0.608 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.443 | TFLOPs: 48.02 | 7: iteration 90/ 2891 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (s): 1.29 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 6.801189E+00 | grad norm: 0.621 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.436 | TFLOPs: 48.02 | 7: iteration 100/ 2891 | consumed samples: 25600 | consumed tokens: 
52428800 | elapsed time per iteration (s): 1.28 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 6.721026E+00 | grad norm: 0.568 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.761 | TFLOPs: 48.58 | 7: iteration 110/ 2891 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (s): 1.27 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 6.648073E+00 | grad norm: 0.931 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.686 | TFLOPs: 48.81 | 7: iteration 120/ 2891 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (s): 1.28 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 6.601103E+00 | grad norm: 0.663 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.196 | TFLOPs: 48.45 | 7: iteration 130/ 2891 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (s): 1.28 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 6.506982E+00 | grad norm: 1.053 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.931 | TFLOPs: 48.38 | 7: iteration 140/ 2891 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (s): 1.27 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 6.472588E+00 | grad norm: 0.751 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.956 | TFLOPs: 48.63 | 7: iteration 150/ 2891 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (s): 1.29 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 6.405199E+00 | grad norm: 0.740 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.396 | TFLOPs: 48.01 | 7: iteration 160/ 2891 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (s): 1.28 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 6.387122E+00 | grad norm: 0.658 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.733 | TFLOPs: 48.33 | 7: iteration 170/ 2891 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (s): 1.27 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 6.346349E+00 | grad norm: 0.666 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.682 | TFLOPs: 48.80 | 7: iteration 180/ 2891 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (s): 1.27 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 6.302601E+00 | grad norm: 1.106 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.972 | TFLOPs: 48.63 | 7: iteration 190/ 2891 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (s): 1.27 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 6.280320E+00 | grad norm: 0.520 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.269 | TFLOPs: 48.71 | 7: iteration 200/ 2891 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 1.26 | learning rate: 1.984E-04 | global batch size: 256 | lm loss: 6.255330E+00 | grad norm: 0.813 | num 
zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.686 | TFLOPs: 49.05 | 7: iteration 210/ 2891 | consumed samples: 53760 | consumed tokens: 110100480 | elapsed time per iteration (s): 1.26 | learning rate: 1.982E-04 | global batch size: 256 | lm loss: 6.184408E+00 | grad norm: 0.531 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.545 | TFLOPs: 49.01 | 7: iteration 220/ 2891 | consumed samples: 56320 | consumed tokens: 115343360 | elapsed time per iteration (s): 1.27 | learning rate: 1.980E-04 | global batch size: 256 | lm loss: 6.182448E+00 | grad norm: 0.750 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.483 | TFLOPs: 48.76 | 7: iteration 230/ 2891 | consumed samples: 58880 | consumed tokens: 120586240 | elapsed time per iteration (s): 1.30 | learning rate: 1.978E-04 | global batch size: 256 | lm loss: 6.089656E+00 | grad norm: 0.490 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.162 | TFLOPs: 47.71 | 7: iteration 240/ 2891 | consumed samples: 61440 | consumed tokens: 125829120 | elapsed time per iteration (s): 1.29 | learning rate: 1.976E-04 | global batch size: 256 | lm loss: 6.059994E+00 | grad norm: 0.728 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.183 | TFLOPs: 47.96 | 7: iteration 250/ 2891 | consumed samples: 64000 | consumed tokens: 131072000 | elapsed time per iteration (s): 1.29 | learning rate: 1.974E-04 | global batch size: 256 | lm loss: 6.052314E+00 | grad norm: 0.447 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.947 | TFLOPs: 48.14 | 7: iteration 260/ 2891 | consumed samples: 66560 | consumed tokens: 136314880 | elapsed time per iteration (s): 1.28 | learning rate: 1.971E-04 | global batch size: 256 | lm loss: 6.016652E+00 | grad norm: 0.867 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.301 | TFLOPs: 48.47 | 7: iteration 270/ 2891 | consumed samples: 69120 | consumed tokens: 141557760 | elapsed time per iteration (s): 1.28 | learning rate: 1.969E-04 | global batch size: 256 | lm loss: 5.984499E+00 | grad norm: 0.427 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.669 | TFLOPs: 48.32 | 7: iteration 280/ 2891 | consumed samples: 71680 | consumed tokens: 146800640 | elapsed time per iteration (s): 1.30 | learning rate: 1.966E-04 | global batch size: 256 | lm loss: 5.945768E+00 | grad norm: 0.547 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.959 | TFLOPs: 47.66 | 7: iteration 290/ 2891 | consumed samples: 74240 | consumed tokens: 152043520 | elapsed time per iteration (s): 1.27 | learning rate: 1.963E-04 | global batch size: 256 | lm loss: 5.918950E+00 | grad norm: 0.623 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.468 | TFLOPs: 48.75 | 7: iteration 300/ 2891 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 1.29 | learning rate: 1.960E-04 | global batch size: 256 | lm loss: 5.851524E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.607 | TFLOPs: 48.06 | 7: iteration 310/ 
2891 | consumed samples: 79360 | consumed tokens: 162529280 | elapsed time per iteration (s): 1.30 | learning rate: 1.958E-04 | global batch size: 256 | lm loss: 5.849449E+00 | grad norm: 0.728 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.232 | TFLOPs: 47.73 | 7: iteration 320/ 2891 | consumed samples: 81920 | consumed tokens: 167772160 | elapsed time per iteration (s): 1.28 | learning rate: 1.954E-04 | global batch size: 256 | lm loss: 5.822430E+00 | grad norm: 0.339 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.730 | TFLOPs: 48.33 | 7: iteration 330/ 2891 | consumed samples: 84480 | consumed tokens: 173015040 | elapsed time per iteration (s): 1.28 | learning rate: 1.951E-04 | global batch size: 256 | lm loss: 5.772898E+00 | grad norm: 0.325 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.186 | TFLOPs: 48.44 | 7: iteration 340/ 2891 | consumed samples: 87040 | consumed tokens: 178257920 | elapsed time per iteration (s): 1.27 | learning rate: 1.948E-04 | global batch size: 256 | lm loss: 5.747245E+00 | grad norm: 0.774 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.560 | TFLOPs: 48.78 | 7: iteration 350/ 2891 | consumed samples: 89600 | consumed tokens: 183500800 | elapsed time per iteration (s): 1.29 | learning rate: 1.945E-04 | global batch size: 256 | lm loss: 5.703384E+00 | grad norm: 0.358 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.154 | TFLOPs: 47.95 | 7: iteration 360/ 2891 | consumed samples: 92160 | consumed tokens: 188743680 | elapsed time per iteration (s): 1.27 | learning rate: 1.941E-04 | global batch size: 256 | lm loss: 5.666056E+00 | grad norm: 0.374 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.833 | TFLOPs: 48.60 | 7: iteration 370/ 2891 | consumed samples: 94720 | consumed tokens: 193986560 | elapsed time per iteration (s): 1.28 | learning rate: 1.938E-04 | global batch size: 256 | lm loss: 5.644558E+00 | grad norm: 0.499 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.497 | TFLOPs: 48.52 | 7: iteration 380/ 2891 | consumed samples: 97280 | consumed tokens: 199229440 | elapsed time per iteration (s): 1.28 | learning rate: 1.934E-04 | global batch size: 256 | lm loss: 5.657554E+00 | grad norm: 0.964 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.193 | TFLOPs: 48.44 | 7: iteration 390/ 2891 | consumed samples: 99840 | consumed tokens: 204472320 | elapsed time per iteration (s): 1.27 | learning rate: 1.930E-04 | global batch size: 256 | lm loss: 5.669816E+00 | grad norm: 0.386 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.133 | TFLOPs: 48.91 | 7: iteration 400/ 2891 | consumed samples: 102400 | consumed tokens: 209715200 | elapsed time per iteration (s): 1.42 | learning rate: 1.926E-04 | global batch size: 256 | lm loss: 5.564518E+00 | grad norm: 0.356 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 180.908 | TFLOPs: 43.78 | 7: iteration 410/ 2891 | consumed samples: 104960 | consumed tokens: 214958080 | elapsed time per iteration (s): 1.27 | learning rate: 1.922E-04 | global batch 
size: 256 | lm loss: 5.515744E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.293 | TFLOPs: 48.95 | 7: iteration 420/ 2891 | consumed samples: 107520 | consumed tokens: 220200960 | elapsed time per iteration (s): 1.29 | learning rate: 1.918E-04 | global batch size: 256 | lm loss: 5.492318E+00 | grad norm: 0.625 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.660 | TFLOPs: 48.07 | 7: iteration 430/ 2891 | consumed samples: 110080 | consumed tokens: 225443840 | elapsed time per iteration (s): 1.28 | learning rate: 1.914E-04 | global batch size: 256 | lm loss: 5.468146E+00 | grad norm: 0.585 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.491 | TFLOPs: 48.27 | 7: iteration 440/ 2891 | consumed samples: 112640 | consumed tokens: 230686720 | elapsed time per iteration (s): 1.27 | learning rate: 1.910E-04 | global batch size: 256 | lm loss: 5.424212E+00 | grad norm: 0.363 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.751 | TFLOPs: 48.82 | 7: iteration 450/ 2891 | consumed samples: 115200 | consumed tokens: 235929600 | elapsed time per iteration (s): 1.27 | learning rate: 1.906E-04 | global batch size: 256 | lm loss: 5.390857E+00 | grad norm: 0.693 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.618 | TFLOPs: 48.79 | 7: iteration 460/ 2891 | consumed samples: 117760 | consumed tokens: 241172480 | elapsed time per iteration (s): 1.28 | learning rate: 1.901E-04 | global batch size: 256 | lm loss: 5.402158E+00 | grad norm: 0.520 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.613 | TFLOPs: 48.55 | 7: iteration 470/ 2891 | consumed samples: 120320 | consumed tokens: 246415360 | elapsed time per iteration (s): 1.27 | learning rate: 1.897E-04 | global batch size: 256 | lm loss: 5.347914E+00 | grad norm: 0.519 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.077 | TFLOPs: 48.90 | 7: iteration 480/ 2891 | consumed samples: 122880 | consumed tokens: 251658240 | elapsed time per iteration (s): 1.27 | learning rate: 1.892E-04 | global batch size: 256 | lm loss: 5.333340E+00 | grad norm: 0.484 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.166 | TFLOPs: 48.68 | 7: iteration 490/ 2891 | consumed samples: 125440 | consumed tokens: 256901120 | elapsed time per iteration (s): 1.27 | learning rate: 1.887E-04 | global batch size: 256 | lm loss: 5.303981E+00 | grad norm: 0.527 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.337 | TFLOPs: 48.96 | 7: iteration 500/ 2891 | consumed samples: 128000 | consumed tokens: 262144000 | elapsed time per iteration (s): 1.27 | learning rate: 1.882E-04 | global batch size: 256 | lm loss: 5.271611E+00 | grad norm: 0.720 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.182 | TFLOPs: 48.93 | 7: iteration 510/ 2891 | consumed samples: 130560 | consumed tokens: 267386880 | elapsed time per iteration (s): 1.31 | learning rate: 1.877E-04 | global batch size: 256 | lm loss: 5.297839E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 
| samples per second: 195.783 | TFLOPs: 47.38 | 7: iteration 520/ 2891 | consumed samples: 133120 | consumed tokens: 272629760 | elapsed time per iteration (s): 1.27 | learning rate: 1.872E-04 | global batch size: 256 | lm loss: 5.216937E+00 | grad norm: 0.452 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.048 | TFLOPs: 48.65 | 7: iteration 530/ 2891 | consumed samples: 135680 | consumed tokens: 277872640 | elapsed time per iteration (s): 1.27 | learning rate: 1.867E-04 | global batch size: 256 | lm loss: 5.188192E+00 | grad norm: 0.632 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.990 | TFLOPs: 48.88 | 7: iteration 540/ 2891 | consumed samples: 138240 | consumed tokens: 283115520 | elapsed time per iteration (s): 1.32 | learning rate: 1.862E-04 | global batch size: 256 | lm loss: 5.187362E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 194.294 | TFLOPs: 47.02 | 7: iteration 550/ 2891 | consumed samples: 140800 | consumed tokens: 288358400 | elapsed time per iteration (s): 1.31 | learning rate: 1.857E-04 | global batch size: 256 | lm loss: 5.129262E+00 | grad norm: 0.618 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 194.957 | TFLOPs: 47.18 | 7: iteration 560/ 2891 | consumed samples: 143360 | consumed tokens: 293601280 | elapsed time per iteration (s): 1.26 | learning rate: 1.851E-04 | global batch size: 256 | lm loss: 5.110377E+00 | grad norm: 0.485 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.604 | TFLOPs: 49.03 | 7: iteration 570/ 2891 | consumed samples: 145920 | consumed tokens: 298844160 | elapsed time per iteration (s): 1.29 | learning rate: 1.846E-04 | global batch size: 256 | lm loss: 5.050526E+00 | grad norm: 0.438 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.576 | TFLOPs: 48.05 | 7: iteration 580/ 2891 | consumed samples: 148480 | consumed tokens: 304087040 | elapsed time per iteration (s): 1.30 | learning rate: 1.840E-04 | global batch size: 256 | lm loss: 5.020714E+00 | grad norm: 0.458 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.002 | TFLOPs: 47.67 | 7: iteration 590/ 2891 | consumed samples: 151040 | consumed tokens: 309329920 | elapsed time per iteration (s): 1.27 | learning rate: 1.835E-04 | global batch size: 256 | lm loss: 5.024595E+00 | grad norm: 0.458 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.828 | TFLOPs: 48.84 | 7: iteration 600/ 2891 | consumed samples: 153600 | consumed tokens: 314572800 | elapsed time per iteration (s): 1.27 | learning rate: 1.829E-04 | global batch size: 256 | lm loss: 4.962969E+00 | grad norm: 0.521 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.731 | TFLOPs: 48.82 | 7: iteration 610/ 2891 | consumed samples: 156160 | consumed tokens: 319815680 | elapsed time per iteration (s): 1.28 | learning rate: 1.823E-04 | global batch size: 256 | lm loss: 4.929038E+00 | grad norm: 0.489 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.929 | TFLOPs: 48.38 | 7: iteration 620/ 2891 | consumed samples: 158720 | consumed tokens: 325058560 | 
elapsed time per iteration (s): 1.28 | learning rate: 1.817E-04 | global batch size: 256 | lm loss: 4.886727E+00 | grad norm: 0.586 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.627 | TFLOPs: 48.31 | 7: iteration 630/ 2891 | consumed samples: 161280 | consumed tokens: 330301440 | elapsed time per iteration (s): 1.30 | learning rate: 1.811E-04 | global batch size: 256 | lm loss: 4.875563E+00 | grad norm: 0.511 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.587 | TFLOPs: 47.57 | 7: iteration 640/ 2891 | consumed samples: 163840 | consumed tokens: 335544320 | elapsed time per iteration (s): 1.29 | learning rate: 1.805E-04 | global batch size: 256 | lm loss: 4.857310E+00 | grad norm: 0.662 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.718 | TFLOPs: 47.85 | 7: iteration 650/ 2891 | consumed samples: 166400 | consumed tokens: 340787200 | elapsed time per iteration (s): 1.29 | learning rate: 1.799E-04 | global batch size: 256 | lm loss: 4.812180E+00 | grad norm: 0.507 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.081 | TFLOPs: 48.18 | 7: iteration 660/ 2891 | consumed samples: 168960 | consumed tokens: 346030080 | elapsed time per iteration (s): 1.27 | learning rate: 1.793E-04 | global batch size: 256 | lm loss: 4.802442E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.347 | TFLOPs: 48.97 | 7: iteration 670/ 2891 | consumed samples: 171520 | consumed tokens: 351272960 | elapsed time per iteration (s): 1.31 | learning rate: 1.786E-04 | global batch size: 256 | lm loss: 4.730896E+00 | grad norm: 0.447 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.044 | TFLOPs: 47.20 | 7: iteration 680/ 2891 | consumed samples: 174080 | consumed tokens: 356515840 | elapsed time per iteration (s): 1.28 | learning rate: 1.780E-04 | global batch size: 256 | lm loss: 4.710181E+00 | grad norm: 0.680 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.806 | TFLOPs: 48.35 | 7: iteration 690/ 2891 | consumed samples: 176640 | consumed tokens: 361758720 | elapsed time per iteration (s): 1.28 | learning rate: 1.773E-04 | global batch size: 256 | lm loss: 4.698782E+00 | grad norm: 0.512 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.252 | TFLOPs: 48.46 | 7: iteration 700/ 2891 | consumed samples: 179200 | consumed tokens: 367001600 | elapsed time per iteration (s): 1.27 | learning rate: 1.767E-04 | global batch size: 256 | lm loss: 4.669154E+00 | grad norm: 0.423 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.351 | TFLOPs: 48.72 | 7: iteration 710/ 2891 | consumed samples: 181760 | consumed tokens: 372244480 | elapsed time per iteration (s): 1.27 | learning rate: 1.760E-04 | global batch size: 256 | lm loss: 4.596683E+00 | grad norm: 0.560 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.040 | TFLOPs: 48.65 | 7: iteration 720/ 2891 | consumed samples: 184320 | consumed tokens: 377487360 | elapsed time per iteration (s): 1.28 | learning rate: 1.753E-04 | global batch size: 256 | lm loss: 4.621252E+00 | grad norm: 0.496 | 
num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.597 | TFLOPs: 48.30 | 7: iteration 730/ 2891 | consumed samples: 186880 | consumed tokens: 382730240 | elapsed time per iteration (s): 1.27 | learning rate: 1.747E-04 | global batch size: 256 | lm loss: 4.562813E+00 | grad norm: 0.658 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.875 | TFLOPs: 48.85 | 7: iteration 740/ 2891 | consumed samples: 189440 | consumed tokens: 387973120 | elapsed time per iteration (s): 1.27 | learning rate: 1.740E-04 | global batch size: 256 | lm loss: 4.529232E+00 | grad norm: 0.341 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.787 | TFLOPs: 48.83 | 7: iteration 750/ 2891 | consumed samples: 192000 | consumed tokens: 393216000 | elapsed time per iteration (s): 1.27 | learning rate: 1.733E-04 | global batch size: 256 | lm loss: 4.463338E+00 | grad norm: 0.456 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.857 | TFLOPs: 48.85 | 7: iteration 760/ 2891 | consumed samples: 194560 | consumed tokens: 398458880 | elapsed time per iteration (s): 1.28 | learning rate: 1.726E-04 | global batch size: 256 | lm loss: 4.450971E+00 | grad norm: 0.753 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.411 | TFLOPs: 48.50 | 7: iteration 770/ 2891 | consumed samples: 197120 | consumed tokens: 403701760 | elapsed time per iteration (s): 1.27 | learning rate: 1.718E-04 | global batch size: 256 | lm loss: 4.549761E+00 | grad norm: 0.685 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.535 | TFLOPs: 48.77 | 7: iteration 780/ 2891 | consumed samples: 199680 | consumed tokens: 408944640 | elapsed time per iteration (s): 1.26 | learning rate: 1.711E-04 | global batch size: 256 | lm loss: 4.497304E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.414 | TFLOPs: 48.98 | 7: iteration 790/ 2891 | consumed samples: 202240 | consumed tokens: 414187520 | elapsed time per iteration (s): 1.27 | learning rate: 1.704E-04 | global batch size: 256 | lm loss: 4.435748E+00 | grad norm: 0.357 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.974 | TFLOPs: 48.88 | 7: iteration 800/ 2891 | consumed samples: 204800 | consumed tokens: 419430400 | elapsed time per iteration (s): 1.30 | learning rate: 1.697E-04 | global batch size: 256 | lm loss: 4.400716E+00 | grad norm: 0.364 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.262 | TFLOPs: 47.74 | 7: iteration 810/ 2891 | consumed samples: 207360 | consumed tokens: 424673280 | elapsed time per iteration (s): 1.30 | learning rate: 1.689E-04 | global batch size: 256 | lm loss: 4.347423E+00 | grad norm: 0.383 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.081 | TFLOPs: 47.69 | 7: iteration 820/ 2891 | consumed samples: 209920 | consumed tokens: 429916160 | elapsed time per iteration (s): 1.27 | learning rate: 1.682E-04 | global batch size: 256 | lm loss: 4.281096E+00 | grad norm: 0.467 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.212 | TFLOPs: 48.69 | 7: 
iteration 830/ 2891 | consumed samples: 212480 | consumed tokens: 435159040 | elapsed time per iteration (s): 1.27 | learning rate: 1.674E-04 | global batch size: 256 | lm loss: 4.317418E+00 | grad norm: 0.364 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.520 | TFLOPs: 48.77 | 7: iteration 840/ 2891 | consumed samples: 215040 | consumed tokens: 440401920 | elapsed time per iteration (s): 1.28 | learning rate: 1.666E-04 | global batch size: 256 | lm loss: 4.302997E+00 | grad norm: 0.456 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.254 | TFLOPs: 48.46 | 7: iteration 850/ 2891 | consumed samples: 217600 | consumed tokens: 445644800 | elapsed time per iteration (s): 1.27 | learning rate: 1.659E-04 | global batch size: 256 | lm loss: 4.240327E+00 | grad norm: 0.348 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.720 | TFLOPs: 48.81 | 7: iteration 860/ 2891 | consumed samples: 220160 | consumed tokens: 450887680 | elapsed time per iteration (s): 1.27 | learning rate: 1.651E-04 | global batch size: 256 | lm loss: 4.223909E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.486 | TFLOPs: 48.76 | 7: iteration 870/ 2891 | consumed samples: 222720 | consumed tokens: 456130560 | elapsed time per iteration (s): 1.28 | learning rate: 1.643E-04 | global batch size: 256 | lm loss: 4.213624E+00 | grad norm: 0.445 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.123 | TFLOPs: 48.43 | 7: iteration 880/ 2891 | consumed samples: 225280 | consumed tokens: 461373440 | elapsed time per iteration (s): 1.28 | learning rate: 1.635E-04 | global batch size: 256 | lm loss: 4.172175E+00 | grad norm: 0.568 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.310 | TFLOPs: 48.23 | 7: iteration 890/ 2891 | consumed samples: 227840 | consumed tokens: 466616320 | elapsed time per iteration (s): 1.30 | learning rate: 1.627E-04 | global batch size: 256 | lm loss: 4.170717E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.491 | TFLOPs: 47.55 | 7: iteration 900/ 2891 | consumed samples: 230400 | consumed tokens: 471859200 | elapsed time per iteration (s): 1.28 | learning rate: 1.619E-04 | global batch size: 256 | lm loss: 4.200544E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.452 | TFLOPs: 48.27 | 7: iteration 910/ 2891 | consumed samples: 232960 | consumed tokens: 477102080 | elapsed time per iteration (s): 1.28 | learning rate: 1.611E-04 | global batch size: 256 | lm loss: 4.168121E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.133 | TFLOPs: 48.43 | 7: iteration 920/ 2891 | consumed samples: 235520 | consumed tokens: 482344960 | elapsed time per iteration (s): 1.28 | learning rate: 1.603E-04 | global batch size: 256 | lm loss: 4.109627E+00 | grad norm: 0.443 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.008 | TFLOPs: 48.40 | 7: iteration 930/ 2891 | consumed samples: 238080 | consumed tokens: 487587840 | elapsed time per iteration (s): 1.27 | learning rate: 
1.595E-04 | global batch size: 256 | lm loss: 4.100901E+00 | grad norm: 0.518 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.217 | TFLOPs: 48.93 | 7: iteration 940/ 2891 | consumed samples: 240640 | consumed tokens: 492830720 | elapsed time per iteration (s): 1.26 | learning rate: 1.586E-04 | global batch size: 256 | lm loss: 4.072250E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.567 | TFLOPs: 49.02 | 7: iteration 950/ 2891 | consumed samples: 243200 | consumed tokens: 498073600 | elapsed time per iteration (s): 1.27 | learning rate: 1.578E-04 | global batch size: 256 | lm loss: 4.063911E+00 | grad norm: 0.356 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.738 | TFLOPs: 48.82 | 7: iteration 960/ 2891 | consumed samples: 245760 | consumed tokens: 503316480 | elapsed time per iteration (s): 1.29 | learning rate: 1.570E-04 | global batch size: 256 | lm loss: 4.049501E+00 | grad norm: 0.370 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.392 | TFLOPs: 48.01 | 7: iteration 970/ 2891 | consumed samples: 248320 | consumed tokens: 508559360 | elapsed time per iteration (s): 1.30 | learning rate: 1.561E-04 | global batch size: 256 | lm loss: 4.044304E+00 | grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.643 | TFLOPs: 47.83 | 7: iteration 980/ 2891 | consumed samples: 250880 | consumed tokens: 513802240 | elapsed time per iteration (s): 1.26 | learning rate: 1.553E-04 | global batch size: 256 | lm loss: 4.052804E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.686 | TFLOPs: 49.05 | 7: iteration 990/ 2891 | consumed samples: 253440 | consumed tokens: 519045120 | elapsed time per iteration (s): 1.29 | learning rate: 1.544E-04 | global batch size: 256 | lm loss: 4.015865E+00 | grad norm: 0.558 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.096 | TFLOPs: 47.94 | 7: iteration 1000/ 2891 | consumed samples: 256000 | consumed tokens: 524288000 | elapsed time per iteration (s): 1.28 | learning rate: 1.536E-04 | global batch size: 256 | lm loss: 4.052459E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.570 | TFLOPs: 48.29 | 7: ------------------------------------------------------------------------------------------ 7: valid loss at iteration 1000 | lm loss value: 3.881743E+00 | lm loss PPL: 4.850871E+01 | 7: ------------------------------------------------------------------------------------------ 0: saving checkpoint at iteration 1000 to checkpoints_1b1oscar 0: [2022-11-29 18:14:07,843] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step1000 is begin to save! 0: [2022-11-29 18:14:07,977] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_01-model_00-model_states.pt... 0: [2022-11-29 18:14:08,237] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_01-model_00-model_states.pt. 0: [2022-11-29 18:14:08,238] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_03-model_00-model_states.pt... 
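Sanity check for the throughput and validation figures logged above: the sketch below converts the roughly 200 samples/s into tokens/s and estimates per-GPU TFLOPs using the standard Megatron-LM formula without activation recomputation. That formula is an assumption about how the logged TFLOPs column is computed, not something taken from this codebase, but it lands at about 48.4 TFLOPs, in line with the 47-49 range reported, and the reported validation PPL is simply exp(lm loss).

import math

h, layers, seq, batch = 1792, 26, 2048, 256
vocab_padded, n_gpus  = 50304, 64      # padded vocabulary is an assumption (see earlier note)
sec_per_iter          = 1.28           # typical "elapsed time per iteration" above

tokens_per_sec = (batch / sec_per_iter) * seq      # ~409,600 tokens/s across all 64 GPUs

# Assumed Megatron-LM estimate (forward + backward, no activation recompute):
flops_per_iter = 72 * batch * seq * layers * h**2 * (
    1 + seq / (6 * h) + vocab_padded / (16 * layers * h))
tflops_per_gpu = flops_per_iter / sec_per_iter / n_gpus / 1e12
print(round(tokens_per_sec), round(tflops_per_gpu, 1))   # ~409600, ~48.4

# Validation perplexity at iteration 1000 is exp(lm loss):
print(math.exp(3.881743))   # ~48.51, i.e. the logged 4.850871E+01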
0: [2022-11-29 18:14:08,319] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_03-model_00-model_states.pt. 0: [2022-11-29 18:14:08,320] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_04-model_00-model_states.pt... 0: [2022-11-29 18:14:08,395] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_04-model_00-model_states.pt. 0: [2022-11-29 18:14:08,396] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_05-model_00-model_states.pt... 0: [2022-11-29 18:14:08,471] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_05-model_00-model_states.pt. 0: [2022-11-29 18:14:08,472] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_06-model_00-model_states.pt... 0: [2022-11-29 18:14:08,547] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_06-model_00-model_states.pt. 0: [2022-11-29 18:14:08,547] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_07-model_00-model_states.pt... 0: [2022-11-29 18:14:08,623] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_07-model_00-model_states.pt. 0: [2022-11-29 18:14:08,623] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_08-model_00-model_states.pt... 0: [2022-11-29 18:14:08,697] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_08-model_00-model_states.pt. 0: [2022-11-29 18:14:08,698] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_09-model_00-model_states.pt... 0: [2022-11-29 18:14:08,770] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_09-model_00-model_states.pt. 0: [2022-11-29 18:14:08,770] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_10-model_00-model_states.pt... 0: [2022-11-29 18:14:08,848] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_10-model_00-model_states.pt. 0: [2022-11-29 18:14:08,848] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_11-model_00-model_states.pt... 0: [2022-11-29 18:14:08,926] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_11-model_00-model_states.pt. 0: [2022-11-29 18:14:08,926] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_12-model_00-model_states.pt... 0: [2022-11-29 18:14:09,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_12-model_00-model_states.pt. 0: [2022-11-29 18:14:09,003] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_13-model_00-model_states.pt... 0: [2022-11-29 18:14:09,114] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_13-model_00-model_states.pt. 0: [2022-11-29 18:14:09,115] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_14-model_00-model_states.pt... 
0: [2022-11-29 18:14:09,189] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_14-model_00-model_states.pt. 0: [2022-11-29 18:14:09,190] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_15-model_00-model_states.pt... 0: [2022-11-29 18:14:09,266] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_15-model_00-model_states.pt. 0: [2022-11-29 18:14:09,266] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_16-model_00-model_states.pt... 0: [2022-11-29 18:14:09,342] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_16-model_00-model_states.pt. 0: [2022-11-29 18:14:09,342] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_17-model_00-model_states.pt... 0: [2022-11-29 18:14:09,417] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_17-model_00-model_states.pt. 0: [2022-11-29 18:14:09,418] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_18-model_00-model_states.pt... 0: [2022-11-29 18:14:09,490] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_18-model_00-model_states.pt. 0: [2022-11-29 18:14:09,490] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_19-model_00-model_states.pt... 0: [2022-11-29 18:14:09,564] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_19-model_00-model_states.pt. 0: [2022-11-29 18:14:09,565] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_20-model_00-model_states.pt... 0: [2022-11-29 18:14:09,641] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_20-model_00-model_states.pt. 0: [2022-11-29 18:14:09,642] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_21-model_00-model_states.pt... 0: [2022-11-29 18:14:09,717] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_21-model_00-model_states.pt. 0: [2022-11-29 18:14:09,717] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_22-model_00-model_states.pt... 0: [2022-11-29 18:14:09,793] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_22-model_00-model_states.pt. 0: [2022-11-29 18:14:09,793] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_23-model_00-model_states.pt... 0: [2022-11-29 18:14:09,867] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_23-model_00-model_states.pt. 0: [2022-11-29 18:14:09,867] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_24-model_00-model_states.pt... 0: [2022-11-29 18:14:09,945] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_24-model_00-model_states.pt. 0: [2022-11-29 18:14:09,945] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_25-model_00-model_states.pt... 
0: [2022-11-29 18:14:10,021] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_25-model_00-model_states.pt. 0: [2022-11-29 18:14:10,021] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_26-model_00-model_states.pt... 0: [2022-11-29 18:14:10,094] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_26-model_00-model_states.pt. 0: [2022-11-29 18:14:10,094] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_27-model_00-model_states.pt... 0: [2022-11-29 18:14:10,168] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_27-model_00-model_states.pt. 0: [2022-11-29 18:14:10,169] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_28-model_00-model_states.pt... 0: [2022-11-29 18:14:10,245] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_28-model_00-model_states.pt. 0: [2022-11-29 18:14:10,245] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/layer_30-model_00-model_states.pt... 0: [2022-11-29 18:14:10,247] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/layer_30-model_00-model_states.pt. 0: [2022-11-29 18:14:10,248] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_1b1oscar/global_step1000/mp_rank_00_model_states.pt 0: [2022-11-29 18:14:10,248] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/mp_rank_00_model_states.pt... 0: [2022-11-29 18:14:10,256] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/mp_rank_00_model_states.pt. 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 
5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 
1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 
6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:14:10,278] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:14:10,533] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:14:10,540] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:14:10,540] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,540] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-29 18:14:10,550] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 
0: [2022-11-29 18:14:10,550] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,550] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-29 18:14:10,551] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:14:10,551] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,551] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-29 18:14:10,558] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:14:10,559] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,559] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-29 18:14:10,570] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:14:10,570] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,570] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-29 18:14:10,586] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:14:10,586] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,586] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-29 18:14:10,586] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:14:10,586] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,587] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-29 18:14:10,597] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,597] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,597] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-29 18:14:10,597] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,607] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 
4: [2022-11-29 18:14:10,607] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,607] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-29 18:14:10,615] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,615] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,615] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-29 18:14:10,616] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,616] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,616] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-29 18:14:10,616] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,616] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,616] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-29 18:14:10,619] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,619] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,619] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:14:10,619] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,619] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,619] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 4: [2022-11-29 18:14:10,619] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-29 18:14:10,619] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-29 18:14:10,619] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-29 18:14:10,598] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,598] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
2: [2022-11-29 18:14:10,598] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:14:10,598] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,598] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-29 18:14:10,599] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:14:10,599] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,599] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,629] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,629] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,629] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,629] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,629] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,629] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,629] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,629] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,629] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,634] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,634] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,634] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,637] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,637] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,637] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
2: [2022-11-29 18:14:10,625] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:14:10,625] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,625] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-29 18:14:10,625] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:14:10,625] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,625] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-29 18:14:10,626] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:14:10,626] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,626] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-29 18:14:10,626] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:14:10,626] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,626] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,644] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,644] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,644] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,644] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,644] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,645] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-29 18:14:10,645] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:14:10,645] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2022-11-29 18:14:10,645] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
3: [2022-11-29 18:14:10,665] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,665] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,665] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-29 18:14:10,665] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,666] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,666] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-29 18:14:10,666] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,666] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,666] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-29 18:14:10,689] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,689] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,689] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-29 18:14:10,690] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,690] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,690] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-29 18:14:10,690] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,691] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,691] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-29 18:14:10,691] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,691] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,691] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
2: [2022-11-29 18:14:10,714] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:14:10,714] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2022-11-29 18:14:10,714] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-29 18:14:10,775] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:14:10,775] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2022-11-29 18:14:10,775] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,797] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:14:10,797] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,797] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,798] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:14:10,798] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,798] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,799] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:14:10,799] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,799] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,799] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:14:10,799] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,799] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,800] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:14:10,800] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:14:10,800] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 
7: [2022-11-29 18:14:10,800] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,800] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,800] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,800] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,800] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,800] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:14:10,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 
1: [2022-11-29 18:14:10,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-29 18:14:10,858] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:14:10,858] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2022-11-29 18:14:10,858] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-29 18:14:10,860] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 0: [2022-11-29 18:14:10,860] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,920] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:14:10,920] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,920] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,920] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:14:10,920] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:14:10,920] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,920] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,920] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,920] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,927] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt. 
5: [2022-11-29 18:14:10,927] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,927] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:14:10,928] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,928] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,928] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,928] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-29 18:14:10,928] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,956] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:14:10,957] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,957] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-29 18:14:10,964] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:14:10,964] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step1000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2022-11-29 18:14:10,964] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
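The save sequence above writes, for step 1000, one layer_XX-model_00-model_states.pt shard per pipeline layer, an mp_rank_00_model_states.pt with the remaining module state, and one bf16_zero_pp_rank_N_mp_rank_00_optim_states.pt optimizer shard per data-parallel rank (ranks 0-63 here), after which every rank reports the checkpoint as committed. As a quick way to spot an incomplete save, the following sketch (an illustrative stdlib helper assuming that layout, not code from the training scripts) counts the shards in one global_step directory:

from pathlib import Path

def summarize_checkpoint(step_dir, expected_dp_ranks=64):
    """Count the shard files written for one global_step directory."""
    step_dir = Path(step_dir)
    layer_shards = sorted(step_dir.glob("layer_*-model_*-model_states.pt"))
    module_states = sorted(step_dir.glob("mp_rank_*_model_states.pt"))
    zero_shards = sorted(step_dir.glob("bf16_zero_pp_rank_*_mp_rank_*_optim_states.pt"))
    print(f"{step_dir.name}: {len(layer_shards)} layer shards, "
          f"{len(module_states)} module-state file(s), {len(zero_shards)} ZeRO optimizer shards")
    if len(zero_shards) < expected_dp_ranks:
        print(f"warning: expected {expected_dp_ranks} optimizer shards, found {len(zero_shards)}")

# e.g. summarize_checkpoint("checkpoints_1b1oscar/global_step1000")

With 64-way data parallelism, a complete checkpoint should show 64 ZeRO optimizer shards; fewer typically means some rank did not finish writing before the job ended.

After the checkpoint is committed, the iteration lines below report consumed samples and tokens, per-iteration time, learning rate, lm loss, throughput, and TFLOPs. Most of those columns can be reproduced from the run configuration alone; the sketch below (again an illustrative helper, assuming the configured 2048-token sequences, the 256-sample global batch, and the cosine schedule from 2e-4 down to 2e-5 over 740,269 samples with 7,403 warmup samples) checks the token accounting, the implied samples/second, and the learning-rate decay against a logged iteration. The TFLOPs column depends on Megatron's full FLOPs estimate and is not reproduced here.

import math

SEQ_LEN = 2048                                   # --seq-length
GLOBAL_BATCH = 256                               # --global-batch-size
LR_MAX, LR_MIN = 2e-4, 2e-5                      # --lr / --min-lr
DECAY_SAMPLES, WARMUP_SAMPLES = 740_269, 7_403   # --lr-decay-samples / --lr-warmup-samples

def expected_tokens(consumed_samples):
    # consumed tokens = consumed samples * sequence length
    return consumed_samples * SEQ_LEN

def samples_per_second(elapsed_per_iter_s):
    # throughput implied by the logged per-iteration time
    return GLOBAL_BATCH / elapsed_per_iter_s

def cosine_lr(consumed_samples):
    # Linear warmup (approximated), then a half-cosine anneal from LR_MAX to LR_MIN
    # over the remaining decay window; all iterations logged here are past warmup.
    if consumed_samples < WARMUP_SAMPLES:
        return LR_MAX * consumed_samples / WARMUP_SAMPLES
    frac = min((consumed_samples - WARMUP_SAMPLES) / (DECAY_SAMPLES - WARMUP_SAMPLES), 1.0)
    return LR_MIN + (LR_MAX - LR_MIN) * 0.5 * (1.0 + math.cos(math.pi * frac))

# Iteration 1100 in the log: 281,600 samples, ~1.27 s/iteration, lr 1.447E-04
print(expected_tokens(281_600))               # 576716800, exactly as logged
print(round(samples_per_second(1.27), 1))     # ~201.6 samples/s (log: 201.856)
print(f"{cosine_lr(281_600):.3e}")            # 1.447e-04, matching the logged rate

For iteration 1100, for example, 281,600 samples × 2048 tokens is exactly the logged 576,716,800 consumed tokens, and the cosine schedule evaluated at that sample count gives 1.447e-4, matching the logged learning rate.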
0: successfully saved checkpoint at iteration 1000 to checkpoints_1b1oscar 7: time (ms) | save-checkpoint: 3129.32 7: iteration 1010/ 2891 | consumed samples: 258560 | consumed tokens: 529530880 | elapsed time per iteration (s): 1.64 | learning rate: 1.527E-04 | global batch size: 256 | lm loss: 3.996400E+00 | grad norm: 0.481 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 156.540 | TFLOPs: 37.88 | 7: iteration 1020/ 2891 | consumed samples: 261120 | consumed tokens: 534773760 | elapsed time per iteration (s): 1.26 | learning rate: 1.518E-04 | global batch size: 256 | lm loss: 4.009910E+00 | grad norm: 0.369 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.599 | TFLOPs: 49.03 | 7: iteration 1030/ 2891 | consumed samples: 263680 | consumed tokens: 540016640 | elapsed time per iteration (s): 1.27 | learning rate: 1.509E-04 | global batch size: 256 | lm loss: 3.982069E+00 | grad norm: 0.358 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.359 | TFLOPs: 48.73 | 7: iteration 1040/ 2891 | consumed samples: 266240 | consumed tokens: 545259520 | elapsed time per iteration (s): 1.28 | learning rate: 1.501E-04 | global batch size: 256 | lm loss: 3.940005E+00 | grad norm: 0.369 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.621 | TFLOPs: 48.55 | 7: iteration 1050/ 2891 | consumed samples: 268800 | consumed tokens: 550502400 | elapsed time per iteration (s): 1.27 | learning rate: 1.492E-04 | global batch size: 256 | lm loss: 3.942002E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.508 | TFLOPs: 48.76 | 7: iteration 1060/ 2891 | consumed samples: 271360 | consumed tokens: 555745280 | elapsed time per iteration (s): 1.26 | learning rate: 1.483E-04 | global batch size: 256 | lm loss: 3.936320E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.655 | TFLOPs: 49.04 | 7: iteration 1070/ 2891 | consumed samples: 273920 | consumed tokens: 560988160 | elapsed time per iteration (s): 1.26 | learning rate: 1.474E-04 | global batch size: 256 | lm loss: 3.919482E+00 | grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.552 | TFLOPs: 49.02 | 7: iteration 1080/ 2891 | consumed samples: 276480 | consumed tokens: 566231040 | elapsed time per iteration (s): 1.27 | learning rate: 1.465E-04 | global batch size: 256 | lm loss: 3.901269E+00 | grad norm: 0.335 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.907 | TFLOPs: 48.86 | 7: iteration 1090/ 2891 | consumed samples: 279040 | consumed tokens: 571473920 | elapsed time per iteration (s): 1.27 | learning rate: 1.456E-04 | global batch size: 256 | lm loss: 3.906593E+00 | grad norm: 0.522 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.336 | TFLOPs: 48.72 | 7: iteration 1100/ 2891 | consumed samples: 281600 | consumed tokens: 576716800 | elapsed time per iteration (s): 1.27 | learning rate: 1.447E-04 | global batch size: 256 | lm loss: 3.888079E+00 | grad norm: 0.297 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.856 | TFLOPs: 48.85 | 7: 
iteration 1110/ 2891 | consumed samples: 284160 | consumed tokens: 581959680 | elapsed time per iteration (s): 1.27 | learning rate: 1.438E-04 | global batch size: 256 | lm loss: 3.837761E+00 | grad norm: 0.311 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.764 | TFLOPs: 48.82 | 7: iteration 1120/ 2891 | consumed samples: 286720 | consumed tokens: 587202560 | elapsed time per iteration (s): 1.26 | learning rate: 1.428E-04 | global batch size: 256 | lm loss: 3.852570E+00 | grad norm: 0.424 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.667 | TFLOPs: 49.04 | 7: iteration 1130/ 2891 | consumed samples: 289280 | consumed tokens: 592445440 | elapsed time per iteration (s): 1.27 | learning rate: 1.419E-04 | global batch size: 256 | lm loss: 3.841681E+00 | grad norm: 0.332 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.243 | TFLOPs: 48.94 | 7: iteration 1140/ 2891 | consumed samples: 291840 | consumed tokens: 597688320 | elapsed time per iteration (s): 1.28 | learning rate: 1.410E-04 | global batch size: 256 | lm loss: 3.849986E+00 | grad norm: 0.370 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.721 | TFLOPs: 48.33 | 7: iteration 1150/ 2891 | consumed samples: 294400 | consumed tokens: 602931200 | elapsed time per iteration (s): 1.27 | learning rate: 1.401E-04 | global batch size: 256 | lm loss: 3.826727E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.036 | TFLOPs: 48.65 | 7: iteration 1160/ 2891 | consumed samples: 296960 | consumed tokens: 608174080 | elapsed time per iteration (s): 1.27 | learning rate: 1.391E-04 | global batch size: 256 | lm loss: 3.842875E+00 | grad norm: 0.311 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.792 | TFLOPs: 48.83 | 7: iteration 1170/ 2891 | consumed samples: 299520 | consumed tokens: 613416960 | elapsed time per iteration (s): 1.29 | learning rate: 1.382E-04 | global batch size: 256 | lm loss: 3.828712E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.131 | TFLOPs: 47.95 | 7: iteration 1180/ 2891 | consumed samples: 302080 | consumed tokens: 618659840 | elapsed time per iteration (s): 1.26 | learning rate: 1.372E-04 | global batch size: 256 | lm loss: 3.866740E+00 | grad norm: 1.616 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.516 | TFLOPs: 49.01 | 7: iteration 1190/ 2891 | consumed samples: 304640 | consumed tokens: 623902720 | elapsed time per iteration (s): 1.33 | learning rate: 1.363E-04 | global batch size: 256 | lm loss: 3.962951E+00 | grad norm: 0.722 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 191.885 | TFLOPs: 46.43 | 7: iteration 1200/ 2891 | consumed samples: 307200 | consumed tokens: 629145600 | elapsed time per iteration (s): 1.28 | learning rate: 1.354E-04 | global batch size: 256 | lm loss: 3.884757E+00 | grad norm: 0.343 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.930 | TFLOPs: 48.38 | 7: iteration 1210/ 2891 | consumed samples: 309760 | consumed tokens: 634388480 | elapsed time per iteration (s): 1.28 | 
learning rate: 1.344E-04 | global batch size: 256 | lm loss: 3.831664E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.369 | TFLOPs: 48.25 | 7: iteration 1220/ 2891 | consumed samples: 312320 | consumed tokens: 639631360 | elapsed time per iteration (s): 1.29 | learning rate: 1.335E-04 | global batch size: 256 | lm loss: 3.811860E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.972 | TFLOPs: 48.15 | 7: iteration 1230/ 2891 | consumed samples: 314880 | consumed tokens: 644874240 | elapsed time per iteration (s): 1.27 | learning rate: 1.325E-04 | global batch size: 256 | lm loss: 3.793566E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.663 | TFLOPs: 48.80 | 7: iteration 1240/ 2891 | consumed samples: 317440 | consumed tokens: 650117120 | elapsed time per iteration (s): 1.28 | learning rate: 1.315E-04 | global batch size: 256 | lm loss: 3.810783E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.884 | TFLOPs: 48.37 | 7: iteration 1250/ 2891 | consumed samples: 320000 | consumed tokens: 655360000 | elapsed time per iteration (s): 1.27 | learning rate: 1.306E-04 | global batch size: 256 | lm loss: 3.774150E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.684 | TFLOPs: 48.81 | 7: iteration 1260/ 2891 | consumed samples: 322560 | consumed tokens: 660602880 | elapsed time per iteration (s): 1.27 | learning rate: 1.296E-04 | global batch size: 256 | lm loss: 3.774349E+00 | grad norm: 0.306 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.735 | TFLOPs: 48.82 | 7: iteration 1270/ 2891 | consumed samples: 325120 | consumed tokens: 665845760 | elapsed time per iteration (s): 1.27 | learning rate: 1.287E-04 | global batch size: 256 | lm loss: 3.763657E+00 | grad norm: 0.351 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.916 | TFLOPs: 48.62 | 7: iteration 1280/ 2891 | consumed samples: 327680 | consumed tokens: 671088640 | elapsed time per iteration (s): 1.27 | learning rate: 1.277E-04 | global batch size: 256 | lm loss: 3.749620E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.071 | TFLOPs: 48.90 | 7: iteration 1290/ 2891 | consumed samples: 330240 | consumed tokens: 676331520 | elapsed time per iteration (s): 1.27 | learning rate: 1.267E-04 | global batch size: 256 | lm loss: 3.735962E+00 | grad norm: 0.276 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.882 | TFLOPs: 48.61 | 7: iteration 1300/ 2891 | consumed samples: 332800 | consumed tokens: 681574400 | elapsed time per iteration (s): 1.27 | learning rate: 1.258E-04 | global batch size: 256 | lm loss: 3.754087E+00 | grad norm: 0.337 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.037 | TFLOPs: 48.89 | 7: iteration 1310/ 2891 | consumed samples: 335360 | consumed tokens: 686817280 | elapsed time per iteration (s): 1.28 | learning rate: 1.248E-04 | global batch size: 256 | lm loss: 3.736427E+00 | grad norm: 0.313 | num zeros: 0.0 | number of 
skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.686 | TFLOPs: 48.56 | 7: iteration 1320/ 2891 | consumed samples: 337920 | consumed tokens: 692060160 | elapsed time per iteration (s): 1.28 | learning rate: 1.238E-04 | global batch size: 256 | lm loss: 3.726226E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.494 | TFLOPs: 48.52 | 7: iteration 1330/ 2891 | consumed samples: 340480 | consumed tokens: 697303040 | elapsed time per iteration (s): 1.30 | learning rate: 1.228E-04 | global batch size: 256 | lm loss: 3.697341E+00 | grad norm: 0.345 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.198 | TFLOPs: 47.72 | 7: iteration 1340/ 2891 | consumed samples: 343040 | consumed tokens: 702545920 | elapsed time per iteration (s): 1.30 | learning rate: 1.218E-04 | global batch size: 256 | lm loss: 3.695453E+00 | grad norm: 0.325 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.504 | TFLOPs: 47.55 | 7: iteration 1350/ 2891 | consumed samples: 345600 | consumed tokens: 707788800 | elapsed time per iteration (s): 1.27 | learning rate: 1.209E-04 | global batch size: 256 | lm loss: 3.699658E+00 | grad norm: 0.330 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.928 | TFLOPs: 48.86 | 7: iteration 1360/ 2891 | consumed samples: 348160 | consumed tokens: 713031680 | elapsed time per iteration (s): 1.27 | learning rate: 1.199E-04 | global batch size: 256 | lm loss: 3.695967E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.434 | TFLOPs: 48.74 | 7: iteration 1370/ 2891 | consumed samples: 350720 | consumed tokens: 718274560 | elapsed time per iteration (s): 1.27 | learning rate: 1.189E-04 | global batch size: 256 | lm loss: 3.675379E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.339 | TFLOPs: 48.96 | 7: iteration 1380/ 2891 | consumed samples: 353280 | consumed tokens: 723517440 | elapsed time per iteration (s): 1.28 | learning rate: 1.179E-04 | global batch size: 256 | lm loss: 3.667464E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.286 | TFLOPs: 48.47 | 7: iteration 1390/ 2891 | consumed samples: 355840 | consumed tokens: 728760320 | elapsed time per iteration (s): 1.27 | learning rate: 1.169E-04 | global batch size: 256 | lm loss: 3.661503E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.447 | TFLOPs: 48.75 | 7: iteration 1400/ 2891 | consumed samples: 358400 | consumed tokens: 734003200 | elapsed time per iteration (s): 1.27 | learning rate: 1.160E-04 | global batch size: 256 | lm loss: 3.677907E+00 | grad norm: 0.313 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.015 | TFLOPs: 48.89 | 7: iteration 1410/ 2891 | consumed samples: 360960 | consumed tokens: 739246080 | elapsed time per iteration (s): 1.27 | learning rate: 1.150E-04 | global batch size: 256 | lm loss: 3.661739E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.174 | TFLOPs: 48.68 | 7: iteration 1420/ 2891 
| consumed samples: 363520 | consumed tokens: 744488960 | elapsed time per iteration (s): 1.27 | learning rate: 1.140E-04 | global batch size: 256 | lm loss: 3.643872E+00 | grad norm: 0.324 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.043 | TFLOPs: 48.89 | 7: iteration 1430/ 2891 | consumed samples: 366080 | consumed tokens: 749731840 | elapsed time per iteration (s): 1.27 | learning rate: 1.130E-04 | global batch size: 256 | lm loss: 3.680123E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.754 | TFLOPs: 48.82 | 7: iteration 1440/ 2891 | consumed samples: 368640 | consumed tokens: 754974720 | elapsed time per iteration (s): 1.27 | learning rate: 1.120E-04 | global batch size: 256 | lm loss: 3.638352E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.471 | TFLOPs: 48.75 | 7: iteration 1450/ 2891 | consumed samples: 371200 | consumed tokens: 760217600 | elapsed time per iteration (s): 1.27 | learning rate: 1.110E-04 | global batch size: 256 | lm loss: 3.658081E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.263 | TFLOPs: 48.70 | 7: iteration 1460/ 2891 | consumed samples: 373760 | consumed tokens: 765460480 | elapsed time per iteration (s): 1.27 | learning rate: 1.100E-04 | global batch size: 256 | lm loss: 3.639560E+00 | grad norm: 0.312 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.982 | TFLOPs: 48.88 | 7: iteration 1470/ 2891 | consumed samples: 376320 | consumed tokens: 770703360 | elapsed time per iteration (s): 1.26 | learning rate: 1.090E-04 | global batch size: 256 | lm loss: 3.652111E+00 | grad norm: 0.313 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.642 | TFLOPs: 49.04 | 7: iteration 1480/ 2891 | consumed samples: 378880 | consumed tokens: 775946240 | elapsed time per iteration (s): 1.27 | learning rate: 1.081E-04 | global batch size: 256 | lm loss: 3.657534E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.979 | TFLOPs: 48.88 | 7: iteration 1490/ 2891 | consumed samples: 381440 | consumed tokens: 781189120 | elapsed time per iteration (s): 1.26 | learning rate: 1.071E-04 | global batch size: 256 | lm loss: 3.601139E+00 | grad norm: 0.334 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.408 | TFLOPs: 48.98 | 7: iteration 1500/ 2891 | consumed samples: 384000 | consumed tokens: 786432000 | elapsed time per iteration (s): 1.27 | learning rate: 1.061E-04 | global batch size: 256 | lm loss: 3.618388E+00 | grad norm: 0.351 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.004 | TFLOPs: 48.88 | 7: iteration 1510/ 2891 | consumed samples: 386560 | consumed tokens: 791674880 | elapsed time per iteration (s): 1.26 | learning rate: 1.051E-04 | global batch size: 256 | lm loss: 3.635867E+00 | grad norm: 0.313 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.663 | TFLOPs: 49.04 | 7: iteration 1520/ 2891 | consumed samples: 389120 | consumed tokens: 796917760 | elapsed time per iteration (s): 1.27 | learning rate: 1.041E-04 | 
global batch size: 256 | lm loss: 3.604443E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.362 | TFLOPs: 48.97 | 7: iteration 1530/ 2891 | consumed samples: 391680 | consumed tokens: 802160640 | elapsed time per iteration (s): 1.26 | learning rate: 1.031E-04 | global batch size: 256 | lm loss: 3.589257E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.565 | TFLOPs: 49.02 | 7: iteration 1540/ 2891 | consumed samples: 394240 | consumed tokens: 807403520 | elapsed time per iteration (s): 1.27 | learning rate: 1.021E-04 | global batch size: 256 | lm loss: 3.625220E+00 | grad norm: 0.331 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.572 | TFLOPs: 48.78 | 7: iteration 1550/ 2891 | consumed samples: 396800 | consumed tokens: 812646400 | elapsed time per iteration (s): 1.26 | learning rate: 1.012E-04 | global batch size: 256 | lm loss: 3.578123E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.650 | TFLOPs: 49.04 | 7: iteration 1560/ 2891 | consumed samples: 399360 | consumed tokens: 817889280 | elapsed time per iteration (s): 1.27 | learning rate: 1.002E-04 | global batch size: 256 | lm loss: 3.570392E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.324 | TFLOPs: 48.96 | 7: iteration 1570/ 2891 | consumed samples: 401920 | consumed tokens: 823132160 | elapsed time per iteration (s): 1.27 | learning rate: 9.919E-05 | global batch size: 256 | lm loss: 3.570094E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.603 | TFLOPs: 48.79 | 7: iteration 1580/ 2891 | consumed samples: 404480 | consumed tokens: 828375040 | elapsed time per iteration (s): 1.27 | learning rate: 9.821E-05 | global batch size: 256 | lm loss: 3.544429E+00 | grad norm: 0.325 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.902 | TFLOPs: 48.86 | 7: iteration 1590/ 2891 | consumed samples: 407040 | consumed tokens: 833617920 | elapsed time per iteration (s): 1.27 | learning rate: 9.723E-05 | global batch size: 256 | lm loss: 3.551081E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.404 | TFLOPs: 48.74 | 7: iteration 1600/ 2891 | consumed samples: 409600 | consumed tokens: 838860800 | elapsed time per iteration (s): 1.27 | learning rate: 9.626E-05 | global batch size: 256 | lm loss: 3.587671E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.038 | TFLOPs: 48.89 | 7: iteration 1610/ 2891 | consumed samples: 412160 | consumed tokens: 844103680 | elapsed time per iteration (s): 1.28 | learning rate: 9.528E-05 | global batch size: 256 | lm loss: 3.555887E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.590 | TFLOPs: 48.54 | 7: iteration 1620/ 2891 | consumed samples: 414720 | consumed tokens: 849346560 | elapsed time per iteration (s): 1.26 | learning rate: 9.431E-05 | global batch size: 256 | lm loss: 3.570174E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 202.540 | TFLOPs: 49.01 | 7: iteration 1630/ 2891 | consumed samples: 417280 | consumed tokens: 854589440 | elapsed time per iteration (s): 1.26 | learning rate: 9.334E-05 | global batch size: 256 | lm loss: 3.569979E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.403 | TFLOPs: 48.98 | 7: iteration 1640/ 2891 | consumed samples: 419840 | consumed tokens: 859832320 | elapsed time per iteration (s): 1.26 | learning rate: 9.237E-05 | global batch size: 256 | lm loss: 3.556800E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.757 | TFLOPs: 49.07 | 7: iteration 1650/ 2891 | consumed samples: 422400 | consumed tokens: 865075200 | elapsed time per iteration (s): 1.27 | learning rate: 9.140E-05 | global batch size: 256 | lm loss: 3.566114E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.329 | TFLOPs: 48.72 | 7: iteration 1660/ 2891 | consumed samples: 424960 | consumed tokens: 870318080 | elapsed time per iteration (s): 1.27 | learning rate: 9.043E-05 | global batch size: 256 | lm loss: 3.557945E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.940 | TFLOPs: 48.63 | 7: iteration 1670/ 2891 | consumed samples: 427520 | consumed tokens: 875560960 | elapsed time per iteration (s): 1.26 | learning rate: 8.947E-05 | global batch size: 256 | lm loss: 3.550856E+00 | grad norm: 0.346 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.665 | TFLOPs: 49.04 | 7: iteration 1680/ 2891 | consumed samples: 430080 | consumed tokens: 880803840 | elapsed time per iteration (s): 1.27 | learning rate: 8.851E-05 | global batch size: 256 | lm loss: 3.521834E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.601 | TFLOPs: 48.79 | 7: iteration 1690/ 2891 | consumed samples: 432640 | consumed tokens: 886046720 | elapsed time per iteration (s): 1.27 | learning rate: 8.755E-05 | global batch size: 256 | lm loss: 3.559994E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.066 | TFLOPs: 48.90 | 7: iteration 1700/ 2891 | consumed samples: 435200 | consumed tokens: 891289600 | elapsed time per iteration (s): 1.27 | learning rate: 8.660E-05 | global batch size: 256 | lm loss: 3.521171E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.545 | TFLOPs: 48.77 | 7: iteration 1710/ 2891 | consumed samples: 437760 | consumed tokens: 896532480 | elapsed time per iteration (s): 1.26 | learning rate: 8.565E-05 | global batch size: 256 | lm loss: 3.513585E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.689 | TFLOPs: 49.05 | 7: iteration 1720/ 2891 | consumed samples: 440320 | consumed tokens: 901775360 | elapsed time per iteration (s): 1.28 | learning rate: 8.470E-05 | global batch size: 256 | lm loss: 3.536938E+00 | grad norm: 0.302 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.841 | TFLOPs: 48.36 | 7: iteration 1730/ 2891 | consumed samples: 
442880 | consumed tokens: 907018240 | elapsed time per iteration (s): 1.27 | learning rate: 8.375E-05 | global batch size: 256 | lm loss: 3.513513E+00 | grad norm: 0.324 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.051 | TFLOPs: 48.89 | 7: iteration 1740/ 2891 | consumed samples: 445440 | consumed tokens: 912261120 | elapsed time per iteration (s): 1.27 | learning rate: 8.281E-05 | global batch size: 256 | lm loss: 3.525639E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.909 | TFLOPs: 48.86 | 7: iteration 1750/ 2891 | consumed samples: 448000 | consumed tokens: 917504000 | elapsed time per iteration (s): 1.26 | learning rate: 8.187E-05 | global batch size: 256 | lm loss: 3.530594E+00 | grad norm: 0.315 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.688 | TFLOPs: 49.05 | 7: iteration 1760/ 2891 | consumed samples: 450560 | consumed tokens: 922746880 | elapsed time per iteration (s): 1.27 | learning rate: 8.093E-05 | global batch size: 256 | lm loss: 3.487004E+00 | grad norm: 0.316 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.088 | TFLOPs: 48.90 | 7: iteration 1770/ 2891 | consumed samples: 453120 | consumed tokens: 927989760 | elapsed time per iteration (s): 1.27 | learning rate: 8.000E-05 | global batch size: 256 | lm loss: 3.507674E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.355 | TFLOPs: 48.97 | 7: iteration 1780/ 2891 | consumed samples: 455680 | consumed tokens: 933232640 | elapsed time per iteration (s): 1.28 | learning rate: 7.907E-05 | global batch size: 256 | lm loss: 3.508082E+00 | grad norm: 0.340 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.349 | TFLOPs: 48.24 | 7: iteration 1790/ 2891 | consumed samples: 458240 | consumed tokens: 938475520 | elapsed time per iteration (s): 1.26 | learning rate: 7.814E-05 | global batch size: 256 | lm loss: 3.507079E+00 | grad norm: 0.301 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.628 | TFLOPs: 49.03 | 7: iteration 1800/ 2891 | consumed samples: 460800 | consumed tokens: 943718400 | elapsed time per iteration (s): 1.28 | learning rate: 7.722E-05 | global batch size: 256 | lm loss: 3.517303E+00 | grad norm: 0.560 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.505 | TFLOPs: 48.52 | 7: iteration 1810/ 2891 | consumed samples: 463360 | consumed tokens: 948961280 | elapsed time per iteration (s): 1.26 | learning rate: 7.630E-05 | global batch size: 256 | lm loss: 3.495737E+00 | grad norm: 0.370 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.627 | TFLOPs: 49.03 | 7: iteration 1820/ 2891 | consumed samples: 465920 | consumed tokens: 954204160 | elapsed time per iteration (s): 1.26 | learning rate: 7.539E-05 | global batch size: 256 | lm loss: 3.479797E+00 | grad norm: 0.338 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.540 | TFLOPs: 49.01 | 7: iteration 1830/ 2891 | consumed samples: 468480 | consumed tokens: 959447040 | elapsed time per iteration (s): 1.26 | learning rate: 7.448E-05 | global batch size: 
256 | lm loss: 3.524964E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.529 | TFLOPs: 49.01 | 7: iteration 1840/ 2891 | consumed samples: 471040 | consumed tokens: 964689920 | elapsed time per iteration (s): 1.29 | learning rate: 7.357E-05 | global batch size: 256 | lm loss: 3.498029E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.864 | TFLOPs: 47.88 | 7: iteration 1850/ 2891 | consumed samples: 473600 | consumed tokens: 969932800 | elapsed time per iteration (s): 1.26 | learning rate: 7.267E-05 | global batch size: 256 | lm loss: 3.489665E+00 | grad norm: 0.320 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.495 | TFLOPs: 49.00 | 7: iteration 1860/ 2891 | consumed samples: 476160 | consumed tokens: 975175680 | elapsed time per iteration (s): 1.26 | learning rate: 7.178E-05 | global batch size: 256 | lm loss: 3.487114E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.560 | TFLOPs: 49.02 | 7: iteration 1870/ 2891 | consumed samples: 478720 | consumed tokens: 980418560 | elapsed time per iteration (s): 1.27 | learning rate: 7.088E-05 | global batch size: 256 | lm loss: 3.447561E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.912 | TFLOPs: 48.62 | 7: iteration 1880/ 2891 | consumed samples: 481280 | consumed tokens: 985661440 | elapsed time per iteration (s): 1.26 | learning rate: 7.000E-05 | global batch size: 256 | lm loss: 3.454849E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.615 | TFLOPs: 49.03 | 7: iteration 1890/ 2891 | consumed samples: 483840 | consumed tokens: 990904320 | elapsed time per iteration (s): 1.26 | learning rate: 6.912E-05 | global batch size: 256 | lm loss: 3.477499E+00 | grad norm: 0.313 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.576 | TFLOPs: 49.02 | 7: iteration 1900/ 2891 | consumed samples: 486400 | consumed tokens: 996147200 | elapsed time per iteration (s): 1.26 | learning rate: 6.824E-05 | global batch size: 256 | lm loss: 3.466372E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.601 | TFLOPs: 49.03 | 7: iteration 1910/ 2891 | consumed samples: 488960 | consumed tokens: 1001390080 | elapsed time per iteration (s): 1.26 | learning rate: 6.737E-05 | global batch size: 256 | lm loss: 3.483150E+00 | grad norm: 0.312 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.612 | TFLOPs: 49.03 | 7: iteration 1920/ 2891 | consumed samples: 491520 | consumed tokens: 1006632960 | elapsed time per iteration (s): 1.26 | learning rate: 6.650E-05 | global batch size: 256 | lm loss: 3.441427E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.549 | TFLOPs: 49.01 | 7: iteration 1930/ 2891 | consumed samples: 494080 | consumed tokens: 1011875840 | elapsed time per iteration (s): 1.26 | learning rate: 6.564E-05 | global batch size: 256 | lm loss: 3.454887E+00 | grad norm: 0.350 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan 
iterations: 0 | samples per second: 202.612 | TFLOPs: 49.03 |
7: iteration 1940/ 2891 | consumed samples: 496640 | consumed tokens: 1017118720 | elapsed time per iteration (s): 1.26 | learning rate: 6.478E-05 | global batch size: 256 | lm loss: 3.479169E+00 | grad norm: 0.319 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.663 | TFLOPs: 49.04 |
7: iteration 1950/ 2891 | consumed samples: 499200 | consumed tokens: 1022361600 | elapsed time per iteration (s): 1.26 | learning rate: 6.393E-05 | global batch size: 256 | lm loss: 3.480439E+00 | grad norm: 0.353 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.621 | TFLOPs: 49.03 |
7: iteration 1960/ 2891 | consumed samples: 501760 | consumed tokens: 1027604480 | elapsed time per iteration (s): 1.28 | learning rate: 6.308E-05 | global batch size: 256 | lm loss: 3.474596E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.492 | TFLOPs: 48.52 |
7: iteration 1970/ 2891 | consumed samples: 504320 | consumed tokens: 1032847360 | elapsed time per iteration (s): 1.26 | learning rate: 6.224E-05 | global batch size: 256 | lm loss: 3.474303E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.641 | TFLOPs: 49.04 |
7: iteration 1980/ 2891 | consumed samples: 506880 | consumed tokens: 1038090240 | elapsed time per iteration (s): 1.27 | learning rate: 6.141E-05 | global batch size: 256 | lm loss: 3.454424E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.404 | TFLOPs: 48.74 |
7: iteration 1990/ 2891 | consumed samples: 509440 | consumed tokens: 1043333120 | elapsed time per iteration (s): 1.26 | learning rate: 6.058E-05 | global batch size: 256 | lm loss: 3.442164E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.639 | TFLOPs: 49.04 |
0: [2022-11-29 18:35:21,641] [INFO] [logging.py:68:log_dist] [Rank 0] step=2000, skipped=0, lr=[5.9757828883278194e-05, 5.9757828883278194e-05, 5.9757828883278194e-05], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
7: iteration 2000/ 2891 | consumed samples: 512000 | consumed tokens: 1048576000 | elapsed time per iteration (s): 1.26 | learning rate: 5.976E-05 | global batch size: 256 | lm loss: 3.447073E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.649 | TFLOPs: 49.04 |
0: steps: 2000 loss: 3.4402 iter time (s): 1.285 samples/sec: 199.288
7: ------------------------------------------------------------------------------------------
7: valid loss at iteration 2000 | lm loss value: 3.345912E+00 | lm loss PPL: 2.838647E+01 |
7: ------------------------------------------------------------------------------------------
0: saving checkpoint at iteration 2000 to checkpoints_1b1oscar
0: [2022-11-29 18:35:22,064] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step2000 is begin to save!
0: [2022-11-29 18:35:22,067] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_01-model_00-model_states.pt...
0: [2022-11-29 18:35:22,264] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_01-model_00-model_states.pt.
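Note on the per-iteration metrics above: they are internally consistent and can be cross-checked from the logged columns alone. Samples per second is roughly the global batch size divided by the elapsed time per iteration, and consumed tokens divided by consumed samples comes out to 2048 tokens per sample. The sketch below replays that arithmetic on a few logged data points; the only outside assumption is the ~1.1B parameter count used for the rough FLOPs bound, which is inferred from the checkpoint directory name (1b1oscar) rather than reported anywhere in the log.

```python
# Sketch: sanity-check a few (iteration, samples, tokens, iter time, samples/s, TFLOPs)
# tuples copied from the log entries above. The 1.1e9 parameter count is an assumption.
points = [
    (1400, 358400, 734003200, 1.27, 202.015, 48.89),
    (1700, 435200, 891289600, 1.27, 201.545, 48.77),
    (2000, 512000, 1048576000, 1.26, 202.649, 49.04),
]

GLOBAL_BATCH = 256   # "global batch size: 256" in every entry
N_GPUS = 64          # bf16_zero_pp_rank_0 ... _63 optimizer shards appear at checkpoint time
PARAMS = 1.1e9       # assumption: ~1.1B parameters, read off the "1b1oscar" directory name

for it, samples, tokens, t_iter, sps_logged, tflops_logged in points:
    tokens_per_sample = tokens / samples        # -> 2048.0, i.e. the sequence length
    sps_from_time = GLOBAL_BATCH / t_iter       # ~201-203, close to the logged samples/s
    tokens_per_sec = sps_logged * tokens_per_sample
    # Rule of thumb: ~6 FLOPs per parameter per token for forward + backward,
    # ignoring attention-score FLOPs, so this is only a rough lower bound.
    tflops_bound = 6 * PARAMS * tokens_per_sec / N_GPUS / 1e12
    print(f"iter {it}: tokens/sample={tokens_per_sample:.0f}, "
          f"samples/s from iter time={sps_from_time:.1f} (logged {sps_logged}), "
          f"rough bound {tflops_bound:.1f} TFLOPs/GPU (logged {tflops_logged})")
```

That the logged TFLOPs sit somewhat above the 6-FLOPs-per-parameter bound (~43 vs ~49) is unsurprising: the reported figure comes from a more detailed formula that also counts terms (attention, and possibly activation recomputation) which the rule of thumb ignores.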
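The validation block above reports both the loss and its perplexity; the two are related by PPL = exp(loss), which is the standard definition rather than anything specific to this run, and the logged pair checks out:

```python
import math

valid_loss = 3.345912            # "lm loss value" at iteration 2000
print(math.exp(valid_loss))      # ~28.3865, matching the logged "lm loss PPL: 2.838647E+01"
```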
0: [2022-11-29 18:35:22,265] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_03-model_00-model_states.pt... 0: [2022-11-29 18:35:22,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_03-model_00-model_states.pt. 0: [2022-11-29 18:35:22,348] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_04-model_00-model_states.pt... 0: [2022-11-29 18:35:22,425] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_04-model_00-model_states.pt. 0: [2022-11-29 18:35:22,426] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_05-model_00-model_states.pt... 0: [2022-11-29 18:35:22,502] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_05-model_00-model_states.pt. 0: [2022-11-29 18:35:22,502] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_06-model_00-model_states.pt... 0: [2022-11-29 18:35:22,576] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_06-model_00-model_states.pt. 0: [2022-11-29 18:35:22,577] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_07-model_00-model_states.pt... 0: [2022-11-29 18:35:22,650] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_07-model_00-model_states.pt. 0: [2022-11-29 18:35:22,650] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_08-model_00-model_states.pt... 0: [2022-11-29 18:35:22,721] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_08-model_00-model_states.pt. 0: [2022-11-29 18:35:22,721] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_09-model_00-model_states.pt... 0: [2022-11-29 18:35:22,797] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_09-model_00-model_states.pt. 0: [2022-11-29 18:35:22,797] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_10-model_00-model_states.pt... 0: [2022-11-29 18:35:22,871] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_10-model_00-model_states.pt. 0: [2022-11-29 18:35:22,872] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_11-model_00-model_states.pt... 0: [2022-11-29 18:35:22,947] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_11-model_00-model_states.pt. 0: [2022-11-29 18:35:22,948] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_12-model_00-model_states.pt... 0: [2022-11-29 18:35:23,022] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_12-model_00-model_states.pt. 0: [2022-11-29 18:35:23,023] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_13-model_00-model_states.pt... 0: [2022-11-29 18:35:23,099] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_13-model_00-model_states.pt. 
0: [2022-11-29 18:35:23,099] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_14-model_00-model_states.pt... 0: [2022-11-29 18:35:23,224] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_14-model_00-model_states.pt. 0: [2022-11-29 18:35:23,224] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_15-model_00-model_states.pt... 0: [2022-11-29 18:35:23,301] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_15-model_00-model_states.pt. 0: [2022-11-29 18:35:23,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_16-model_00-model_states.pt... 0: [2022-11-29 18:35:23,376] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_16-model_00-model_states.pt. 0: [2022-11-29 18:35:23,376] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_17-model_00-model_states.pt... 0: [2022-11-29 18:35:23,451] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_17-model_00-model_states.pt. 0: [2022-11-29 18:35:23,451] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_18-model_00-model_states.pt... 0: [2022-11-29 18:35:23,525] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_18-model_00-model_states.pt. 0: [2022-11-29 18:35:23,526] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_19-model_00-model_states.pt... 0: [2022-11-29 18:35:23,598] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_19-model_00-model_states.pt. 0: [2022-11-29 18:35:23,599] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_20-model_00-model_states.pt... 0: [2022-11-29 18:35:23,676] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_20-model_00-model_states.pt. 0: [2022-11-29 18:35:23,677] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_21-model_00-model_states.pt... 0: [2022-11-29 18:35:23,752] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_21-model_00-model_states.pt. 0: [2022-11-29 18:35:23,753] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_22-model_00-model_states.pt... 0: [2022-11-29 18:35:23,825] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_22-model_00-model_states.pt. 0: [2022-11-29 18:35:23,826] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_23-model_00-model_states.pt... 0: [2022-11-29 18:35:23,903] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_23-model_00-model_states.pt. 0: [2022-11-29 18:35:23,903] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_24-model_00-model_states.pt... 0: [2022-11-29 18:35:23,977] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_24-model_00-model_states.pt. 
0: [2022-11-29 18:35:23,978] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_25-model_00-model_states.pt... 0: [2022-11-29 18:35:24,050] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_25-model_00-model_states.pt. 0: [2022-11-29 18:35:24,050] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_26-model_00-model_states.pt... 0: [2022-11-29 18:35:24,124] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_26-model_00-model_states.pt. 0: [2022-11-29 18:35:24,124] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_27-model_00-model_states.pt... 0: [2022-11-29 18:35:24,199] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_27-model_00-model_states.pt. 0: [2022-11-29 18:35:24,199] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_28-model_00-model_states.pt... 0: [2022-11-29 18:35:24,274] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_28-model_00-model_states.pt. 0: [2022-11-29 18:35:24,274] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/layer_30-model_00-model_states.pt... 0: [2022-11-29 18:35:24,277] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/layer_30-model_00-model_states.pt. 0: [2022-11-29 18:35:24,278] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_1b1oscar/global_step2000/mp_rank_00_model_states.pt 0: [2022-11-29 18:35:24,279] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/mp_rank_00_model_states.pt... 0: [2022-11-29 18:35:24,281] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/mp_rank_00_model_states.pt. 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 
4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 
1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 
3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:35:24,301] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:35:24,535] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,542] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,542] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,542] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
3: [2022-11-29 18:35:24,542] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:35:24,542] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,542] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,546] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:35:24,546] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,546] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-29 18:35:24,549] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,549] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,549] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-29 18:35:24,549] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:35:24,549] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,549] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-29 18:35:24,553] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,553] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,553] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-29 18:35:24,554] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,554] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,554] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,555] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:35:24,555] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,555] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
1: [2022-11-29 18:35:24,556] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:35:24,556] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,556] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,556] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,556] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 5: [2022-11-29 18:35:24,557] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-29 18:35:24,557] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,557] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,557] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-29 18:35:24,557] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:35:24,557] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,557] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-29 18:35:24,558] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:35:24,559] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,559] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,559] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:35:24,559] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,559] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-29 18:35:24,560] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,560] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,560] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
1: [2022-11-29 18:35:24,560] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:35:24,560] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,560] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,561] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:35:24,561] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,561] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,561] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 5: [2022-11-29 18:35:24,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-29 18:35:24,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,563] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,563] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,563] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-29 18:35:24,564] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,564] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,564] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,564] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,564] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2022-11-29 18:35:24,564] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
3: [2022-11-29 18:35:24,564] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,566] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,566] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,566] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,566] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,566] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 5: [2022-11-29 18:35:24,566] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,567] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 5: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-29 18:35:24,568] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,569] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,569] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-29 18:35:24,569] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,569] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,569] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-29 18:35:24,570] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,570] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,570] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-29 18:35:24,573] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 
1: [2022-11-29 18:35:24,573] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,573] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-29 18:35:24,573] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:35:24,573] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,573] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:35:24,573] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-29 18:35:24,573] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,573] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,564] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,565] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:35:24,567] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,567] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:35:24,567] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,567] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,569] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 
2: [2022-11-29 18:35:24,569] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,569] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-29 18:35:24,577] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,577] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,577] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,577] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,577] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-29 18:35:24,577] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,577] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,578] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2022-11-29 18:35:24,578] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-29 18:35:24,578] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:35:24,578] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,578] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,579] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt 5: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-29 18:35:24,579] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
6: [2022-11-29 18:35:24,579] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:35:24,579] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-29 18:35:24,579] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:35:24,580] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,580] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-29 18:35:24,580] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:35:24,580] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,580] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-29 18:35:24,580] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,580] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,580] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-29 18:35:24,580] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,580] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,580] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-29 18:35:24,588] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,588] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,588] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-29 18:35:24,588] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 
7: [2022-11-29 18:35:24,588] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,588] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,597] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:35:24,597] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,597] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,597] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:35:24,597] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,597] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,602] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,603] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:35:24,603] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2022-11-29 18:35:24,603] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-29 18:35:24,605] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,605] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,605] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-29 18:35:24,606] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:35:24,606] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:35:24,606] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 
6: [2022-11-29 18:35:24,606] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,606] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,606] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2022-11-29 18:35:24,606] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-29 18:35:24,606] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-29 18:35:24,606] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,602] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,603] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-29 18:35:24,603] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:35:24,603] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt 3: [2022-11-29 18:35:24,603] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-29 18:35:24,609] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:35:24,609] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2022-11-29 18:35:24,609] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-29 18:35:24,611] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:35:24,611] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2022-11-29 18:35:24,611] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-29 18:35:24,629] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:35:24,629] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2022-11-29 18:35:24,629] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-29 18:35:24,808] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 0: [2022-11-29 18:35:24,808] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
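(Editor's note.) The per-rank messages above show each of the 64 data-parallel ranks writing its own bf16 ZeRO optimizer shard (bf16_zero_pp_rank_0 … 63) for global_step2000 before declaring the checkpoint ready. Before resuming from such a checkpoint it can be useful to confirm that no shard is missing. Below is a minimal sketch, assuming the checkpoints_1b1oscar layout and the file-name pattern exactly as they appear in the log and a world size of 64; it is not part of DeepSpeed itself.

```python
# Sanity-check that every data-parallel rank wrote its bf16 ZeRO optimizer shard
# for a given global step. File-name pattern copied from the log above; the
# directory name and world size (64) are assumptions for this particular run.
import re
from pathlib import Path

def missing_zero_shards(ckpt_root="checkpoints_1b1oscar", step=2000, world_size=64):
    step_dir = Path(ckpt_root) / f"global_step{step}"
    pattern = re.compile(r"bf16_zero_pp_rank_(\d+)_mp_rank_00_optim_states\.pt")
    present = {
        int(m.group(1))
        for p in step_dir.glob("bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt")
        if (m := pattern.fullmatch(p.name))
    }
    return sorted(set(range(world_size)) - present)

if __name__ == "__main__":
    gaps = missing_zero_shards()
    print("all 64 optimizer shards present" if not gaps else f"missing ranks: {gaps}")
```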
0: successfully saved checkpoint at iteration 2000 to checkpoints_1b1oscar 7: time (ms) | save-checkpoint: 2769.97 7: iteration 2010/ 2891 | consumed samples: 514560 | consumed tokens: 1053818880 | elapsed time per iteration (s): 1.60 | learning rate: 5.894E-05 | global batch size: 256 | lm loss: 3.431136E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 159.674 | TFLOPs: 38.64 | 7: iteration 2020/ 2891 | consumed samples: 517120 | consumed tokens: 1059061760 | elapsed time per iteration (s): 1.28 | learning rate: 5.813E-05 | global batch size: 256 | lm loss: 3.434014E+00 | grad norm: 0.359 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.733 | TFLOPs: 48.58 | 7: iteration 2030/ 2891 | consumed samples: 519680 | consumed tokens: 1064304640 | elapsed time per iteration (s): 1.27 | learning rate: 5.733E-05 | global batch size: 256 | lm loss: 3.448711E+00 | grad norm: 0.314 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.867 | TFLOPs: 48.85 | 7: iteration 2040/ 2891 | consumed samples: 522240 | consumed tokens: 1069547520 | elapsed time per iteration (s): 1.26 | learning rate: 5.653E-05 | global batch size: 256 | lm loss: 3.415279E+00 | grad norm: 0.262 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.592 | TFLOPs: 49.03 | 7: iteration 2050/ 2891 | consumed samples: 524800 | consumed tokens: 1074790400 | elapsed time per iteration (s): 1.26 | learning rate: 5.574E-05 | global batch size: 256 | lm loss: 3.419112E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.570 | TFLOPs: 49.02 | 7: iteration 2060/ 2891 | consumed samples: 527360 | consumed tokens: 1080033280 | elapsed time per iteration (s): 1.27 | learning rate: 5.495E-05 | global batch size: 256 | lm loss: 3.421627E+00 | grad norm: 0.309 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.406 | TFLOPs: 48.74 | 7: iteration 2070/ 2891 | consumed samples: 529920 | consumed tokens: 1085276160 | elapsed time per iteration (s): 1.27 | learning rate: 5.418E-05 | global batch size: 256 | lm loss: 3.406310E+00 | grad norm: 0.336 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.987 | TFLOPs: 48.88 | 7: iteration 2080/ 2891 | consumed samples: 532480 | consumed tokens: 1090519040 | elapsed time per iteration (s): 1.26 | learning rate: 5.340E-05 | global batch size: 256 | lm loss: 3.414038E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.486 | TFLOPs: 49.00 | 7: iteration 2090/ 2891 | consumed samples: 535040 | consumed tokens: 1095761920 | elapsed time per iteration (s): 1.27 | learning rate: 5.264E-05 | global batch size: 256 | lm loss: 3.394331E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.650 | TFLOPs: 48.80 | 7: iteration 2100/ 2891 | consumed samples: 537600 | consumed tokens: 1101004800 | elapsed time per iteration (s): 1.26 | learning rate: 5.188E-05 | global batch size: 256 | lm loss: 3.416785E+00 | grad norm: 0.274 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.593 | TFLOPs: 49.03 | 
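(Editor's note.) The iteration lines report consumed samples, consumed tokens, samples per second and TFLOPs, and these are internally consistent with the run configuration (seq-length 2048, global batch 256, micro-batch 4 on 64 GPUs). The short check below reproduces the bookkeeping under those assumptions; the TFLOPs line is only a crude 6·N·(tokens/s) lower bound, not Megatron's exact FLOPs formula.

```python
# Reproduce the bookkeeping in the iteration lines above from the run config.
# Assumptions: seq_len 2048, global batch 256, 64 GPUs (8 nodes x 8 GCDs), ~1.1B params.
import math

seq_len, global_batch, n_gpus, params = 2048, 256, 64, 1.1e9

# iteration 2010: "consumed samples: 514560 | consumed tokens: 1053818880"
assert 514560 * seq_len == 1_053_818_880

# "elapsed time per iteration (s): 1.26" -> "samples per second: ~202.6"
iter_time = 1.26
print(f"samples/s = {global_batch / iter_time:.1f}")            # ~203

# micro-batch 4 on 64 GPUs -> gradient accumulation steps per iteration
print("grad accumulation =", global_batch // (4 * n_gpus))       # 1

# --train-samples 740_269 at batch 256 -> roughly the 2891 iterations in the log
print("total iterations ~", math.ceil(740_269 / global_batch))

# crude per-GPU throughput estimate: ~6 * params * tokens/s, ignoring attention FLOPs
tokens_per_s_per_gpu = (global_batch / iter_time) * seq_len / n_gpus
print(f"~{6 * params * tokens_per_s_per_gpu / 1e12:.0f} TFLOPs/GPU vs ~49 logged")
```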
7: iteration 2110/ 2891 | consumed samples: 540160 | consumed tokens: 1106247680 | elapsed time per iteration (s): 1.26 | learning rate: 5.113E-05 | global batch size: 256 | lm loss: 3.430867E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.427 | TFLOPs: 48.99 | 7: iteration 2120/ 2891 | consumed samples: 542720 | consumed tokens: 1111490560 | elapsed time per iteration (s): 1.26 | learning rate: 5.039E-05 | global batch size: 256 | lm loss: 3.407412E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.602 | TFLOPs: 49.03 | 7: iteration 2130/ 2891 | consumed samples: 545280 | consumed tokens: 1116733440 | elapsed time per iteration (s): 1.26 | learning rate: 4.965E-05 | global batch size: 256 | lm loss: 3.400624E+00 | grad norm: 0.297 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.498 | TFLOPs: 49.00 | 7: iteration 2140/ 2891 | consumed samples: 547840 | consumed tokens: 1121976320 | elapsed time per iteration (s): 1.26 | learning rate: 4.892E-05 | global batch size: 256 | lm loss: 3.414149E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.562 | TFLOPs: 49.02 | 7: iteration 2150/ 2891 | consumed samples: 550400 | consumed tokens: 1127219200 | elapsed time per iteration (s): 1.26 | learning rate: 4.820E-05 | global batch size: 256 | lm loss: 3.407611E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.557 | TFLOPs: 49.02 | 7: iteration 2160/ 2891 | consumed samples: 552960 | consumed tokens: 1132462080 | elapsed time per iteration (s): 1.26 | learning rate: 4.749E-05 | global batch size: 256 | lm loss: 3.430661E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.517 | TFLOPs: 49.01 | 7: iteration 2170/ 2891 | consumed samples: 555520 | consumed tokens: 1137704960 | elapsed time per iteration (s): 1.26 | learning rate: 4.678E-05 | global batch size: 256 | lm loss: 3.402907E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.535 | TFLOPs: 49.01 | 7: iteration 2180/ 2891 | consumed samples: 558080 | consumed tokens: 1142947840 | elapsed time per iteration (s): 1.26 | learning rate: 4.608E-05 | global batch size: 256 | lm loss: 3.398573E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.468 | TFLOPs: 49.00 | 7: iteration 2190/ 2891 | consumed samples: 560640 | consumed tokens: 1148190720 | elapsed time per iteration (s): 1.27 | learning rate: 4.539E-05 | global batch size: 256 | lm loss: 3.400523E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.272 | TFLOPs: 48.71 | 7: iteration 2200/ 2891 | consumed samples: 563200 | consumed tokens: 1153433600 | elapsed time per iteration (s): 1.26 | learning rate: 4.471E-05 | global batch size: 256 | lm loss: 3.393715E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.607 | TFLOPs: 49.03 | 7: iteration 2210/ 2891 | consumed samples: 565760 | consumed tokens: 1158676480 | elapsed time per iteration (s): 
1.26 | learning rate: 4.403E-05 | global batch size: 256 | lm loss: 3.393682E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.543 | TFLOPs: 49.01 | 7: iteration 2220/ 2891 | consumed samples: 568320 | consumed tokens: 1163919360 | elapsed time per iteration (s): 1.26 | learning rate: 4.336E-05 | global batch size: 256 | lm loss: 3.397488E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.628 | TFLOPs: 49.03 | 7: iteration 2230/ 2891 | consumed samples: 570880 | consumed tokens: 1169162240 | elapsed time per iteration (s): 1.26 | learning rate: 4.270E-05 | global batch size: 256 | lm loss: 3.383004E+00 | grad norm: 0.311 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.534 | TFLOPs: 49.01 | 7: iteration 2240/ 2891 | consumed samples: 573440 | consumed tokens: 1174405120 | elapsed time per iteration (s): 1.26 | learning rate: 4.205E-05 | global batch size: 256 | lm loss: 3.405868E+00 | grad norm: 0.262 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.559 | TFLOPs: 49.02 | 7: iteration 2250/ 2891 | consumed samples: 576000 | consumed tokens: 1179648000 | elapsed time per iteration (s): 1.26 | learning rate: 4.141E-05 | global batch size: 256 | lm loss: 3.392905E+00 | grad norm: 0.259 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.391 | TFLOPs: 48.98 | 7: iteration 2260/ 2891 | consumed samples: 578560 | consumed tokens: 1184890880 | elapsed time per iteration (s): 1.26 | learning rate: 4.077E-05 | global batch size: 256 | lm loss: 3.378339E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.661 | TFLOPs: 49.04 | 7: iteration 2270/ 2891 | consumed samples: 581120 | consumed tokens: 1190133760 | elapsed time per iteration (s): 1.26 | learning rate: 4.014E-05 | global batch size: 256 | lm loss: 3.384299E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.470 | TFLOPs: 49.00 | 7: iteration 2280/ 2891 | consumed samples: 583680 | consumed tokens: 1195376640 | elapsed time per iteration (s): 1.26 | learning rate: 3.953E-05 | global batch size: 256 | lm loss: 3.391397E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.548 | TFLOPs: 49.01 | 7: iteration 2290/ 2891 | consumed samples: 586240 | consumed tokens: 1200619520 | elapsed time per iteration (s): 1.26 | learning rate: 3.892E-05 | global batch size: 256 | lm loss: 3.399052E+00 | grad norm: 0.263 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.479 | TFLOPs: 49.00 | 7: iteration 2300/ 2891 | consumed samples: 588800 | consumed tokens: 1205862400 | elapsed time per iteration (s): 1.26 | learning rate: 3.831E-05 | global batch size: 256 | lm loss: 3.367217E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.587 | TFLOPs: 49.02 | 7: iteration 2310/ 2891 | consumed samples: 591360 | consumed tokens: 1211105280 | elapsed time per iteration (s): 1.28 | learning rate: 3.772E-05 | global batch size: 256 | lm loss: 3.377293E+00 | grad norm: 0.285 | num zeros: 
0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.047 | TFLOPs: 48.41 | 7: iteration 2320/ 2891 | consumed samples: 593920 | consumed tokens: 1216348160 | elapsed time per iteration (s): 1.26 | learning rate: 3.714E-05 | global batch size: 256 | lm loss: 3.359563E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.614 | TFLOPs: 49.03 | 7: iteration 2330/ 2891 | consumed samples: 596480 | consumed tokens: 1221591040 | elapsed time per iteration (s): 1.26 | learning rate: 3.656E-05 | global batch size: 256 | lm loss: 3.386751E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.408 | TFLOPs: 48.98 | 7: iteration 2340/ 2891 | consumed samples: 599040 | consumed tokens: 1226833920 | elapsed time per iteration (s): 1.26 | learning rate: 3.600E-05 | global batch size: 256 | lm loss: 3.375873E+00 | grad norm: 0.315 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.440 | TFLOPs: 48.99 | 7: iteration 2350/ 2891 | consumed samples: 601600 | consumed tokens: 1232076800 | elapsed time per iteration (s): 1.29 | learning rate: 3.544E-05 | global batch size: 256 | lm loss: 3.389051E+00 | grad norm: 0.262 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.399 | TFLOPs: 48.01 | 7: iteration 2360/ 2891 | consumed samples: 604160 | consumed tokens: 1237319680 | elapsed time per iteration (s): 1.26 | learning rate: 3.489E-05 | global batch size: 256 | lm loss: 3.352780E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.488 | TFLOPs: 49.00 | 7: iteration 2370/ 2891 | consumed samples: 606720 | consumed tokens: 1242562560 | elapsed time per iteration (s): 1.26 | learning rate: 3.435E-05 | global batch size: 256 | lm loss: 3.333027E+00 | grad norm: 0.293 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.533 | TFLOPs: 49.01 | 7: iteration 2380/ 2891 | consumed samples: 609280 | consumed tokens: 1247805440 | elapsed time per iteration (s): 1.26 | learning rate: 3.382E-05 | global batch size: 256 | lm loss: 3.339282E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.640 | TFLOPs: 49.04 | 7: iteration 2390/ 2891 | consumed samples: 611840 | consumed tokens: 1253048320 | elapsed time per iteration (s): 1.27 | learning rate: 3.330E-05 | global batch size: 256 | lm loss: 3.383463E+00 | grad norm: 0.478 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.827 | TFLOPs: 48.84 | 7: iteration 2400/ 2891 | consumed samples: 614400 | consumed tokens: 1258291200 | elapsed time per iteration (s): 1.26 | learning rate: 3.279E-05 | global batch size: 256 | lm loss: 3.343145E+00 | grad norm: 0.263 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.548 | TFLOPs: 49.01 | 7: iteration 2410/ 2891 | consumed samples: 616960 | consumed tokens: 1263534080 | elapsed time per iteration (s): 1.26 | learning rate: 3.228E-05 | global batch size: 256 | lm loss: 3.370723E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.593 | TFLOPs: 49.03 
| 7: iteration 2420/ 2891 | consumed samples: 619520 | consumed tokens: 1268776960 | elapsed time per iteration (s): 1.26 | learning rate: 3.179E-05 | global batch size: 256 | lm loss: 3.345803E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.705 | TFLOPs: 49.05 | 7: iteration 2430/ 2891 | consumed samples: 622080 | consumed tokens: 1274019840 | elapsed time per iteration (s): 1.26 | learning rate: 3.131E-05 | global batch size: 256 | lm loss: 3.370715E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.531 | TFLOPs: 49.01 | 7: iteration 2440/ 2891 | consumed samples: 624640 | consumed tokens: 1279262720 | elapsed time per iteration (s): 1.28 | learning rate: 3.083E-05 | global batch size: 256 | lm loss: 3.354684E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.854 | TFLOPs: 48.36 | 7: iteration 2450/ 2891 | consumed samples: 627200 | consumed tokens: 1284505600 | elapsed time per iteration (s): 1.26 | learning rate: 3.037E-05 | global batch size: 256 | lm loss: 3.366651E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.526 | TFLOPs: 49.01 | 7: iteration 2460/ 2891 | consumed samples: 629760 | consumed tokens: 1289748480 | elapsed time per iteration (s): 1.26 | learning rate: 2.991E-05 | global batch size: 256 | lm loss: 3.340447E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.638 | TFLOPs: 49.04 | 7: iteration 2470/ 2891 | consumed samples: 632320 | consumed tokens: 1294991360 | elapsed time per iteration (s): 1.26 | learning rate: 2.947E-05 | global batch size: 256 | lm loss: 3.362162E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.650 | TFLOPs: 49.04 | 7: iteration 2480/ 2891 | consumed samples: 634880 | consumed tokens: 1300234240 | elapsed time per iteration (s): 1.26 | learning rate: 2.903E-05 | global batch size: 256 | lm loss: 3.364618E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.584 | TFLOPs: 49.02 | 7: iteration 2490/ 2891 | consumed samples: 637440 | consumed tokens: 1305477120 | elapsed time per iteration (s): 1.26 | learning rate: 2.860E-05 | global batch size: 256 | lm loss: 3.355886E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.572 | TFLOPs: 49.02 | 7: iteration 2500/ 2891 | consumed samples: 640000 | consumed tokens: 1310720000 | elapsed time per iteration (s): 1.26 | learning rate: 2.819E-05 | global batch size: 256 | lm loss: 3.377243E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.575 | TFLOPs: 49.02 | 7: iteration 2510/ 2891 | consumed samples: 642560 | consumed tokens: 1315962880 | elapsed time per iteration (s): 1.26 | learning rate: 2.778E-05 | global batch size: 256 | lm loss: 3.365576E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.574 | TFLOPs: 49.02 | 7: iteration 2520/ 2891 | consumed samples: 645120 | consumed tokens: 1321205760 | elapsed time per iteration 
(s): 1.26 | learning rate: 2.738E-05 | global batch size: 256 | lm loss: 3.353968E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.480 | TFLOPs: 49.00 | 7: iteration 2530/ 2891 | consumed samples: 647680 | consumed tokens: 1326448640 | elapsed time per iteration (s): 1.26 | learning rate: 2.700E-05 | global batch size: 256 | lm loss: 3.347538E+00 | grad norm: 0.285 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.701 | TFLOPs: 49.05 | 7: iteration 2540/ 2891 | consumed samples: 650240 | consumed tokens: 1331691520 | elapsed time per iteration (s): 1.26 | learning rate: 2.662E-05 | global batch size: 256 | lm loss: 3.346878E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.566 | TFLOPs: 49.02 | 7: iteration 2550/ 2891 | consumed samples: 652800 | consumed tokens: 1336934400 | elapsed time per iteration (s): 1.28 | learning rate: 2.625E-05 | global batch size: 256 | lm loss: 3.349545E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.673 | TFLOPs: 48.32 | 7: iteration 2560/ 2891 | consumed samples: 655360 | consumed tokens: 1342177280 | elapsed time per iteration (s): 1.26 | learning rate: 2.590E-05 | global batch size: 256 | lm loss: 3.347480E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.527 | TFLOPs: 49.01 | 7: iteration 2570/ 2891 | consumed samples: 657920 | consumed tokens: 1347420160 | elapsed time per iteration (s): 1.26 | learning rate: 2.555E-05 | global batch size: 256 | lm loss: 3.346072E+00 | grad norm: 0.274 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.548 | TFLOPs: 49.01 | 7: iteration 2580/ 2891 | consumed samples: 660480 | consumed tokens: 1352663040 | elapsed time per iteration (s): 1.26 | learning rate: 2.521E-05 | global batch size: 256 | lm loss: 3.348163E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.578 | TFLOPs: 49.02 | 7: iteration 2590/ 2891 | consumed samples: 663040 | consumed tokens: 1357905920 | elapsed time per iteration (s): 1.27 | learning rate: 2.489E-05 | global batch size: 256 | lm loss: 3.326427E+00 | grad norm: 0.377 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.151 | TFLOPs: 48.92 | 7: iteration 2600/ 2891 | consumed samples: 665600 | consumed tokens: 1363148800 | elapsed time per iteration (s): 1.26 | learning rate: 2.457E-05 | global batch size: 256 | lm loss: 3.324560E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.527 | TFLOPs: 49.01 | 7: iteration 2610/ 2891 | consumed samples: 668160 | consumed tokens: 1368391680 | elapsed time per iteration (s): 1.26 | learning rate: 2.427E-05 | global batch size: 256 | lm loss: 3.325753E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.409 | TFLOPs: 48.98 | 7: iteration 2620/ 2891 | consumed samples: 670720 | consumed tokens: 1373634560 | elapsed time per iteration (s): 1.26 | learning rate: 2.397E-05 | global batch size: 256 | lm loss: 3.330455E+00 | grad norm: 0.280 | num 
zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.545 | TFLOPs: 49.01 | 7: iteration 2630/ 2891 | consumed samples: 673280 | consumed tokens: 1378877440 | elapsed time per iteration (s): 1.26 | learning rate: 2.369E-05 | global batch size: 256 | lm loss: 3.319068E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.504 | TFLOPs: 49.00 | 7: iteration 2640/ 2891 | consumed samples: 675840 | consumed tokens: 1384120320 | elapsed time per iteration (s): 1.26 | learning rate: 2.341E-05 | global batch size: 256 | lm loss: 3.348583E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.509 | TFLOPs: 49.01 | 7: iteration 2650/ 2891 | consumed samples: 678400 | consumed tokens: 1389363200 | elapsed time per iteration (s): 1.26 | learning rate: 2.315E-05 | global batch size: 256 | lm loss: 3.305918E+00 | grad norm: 0.268 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.398 | TFLOPs: 48.98 | 7: iteration 2660/ 2891 | consumed samples: 680960 | consumed tokens: 1394606080 | elapsed time per iteration (s): 1.26 | learning rate: 2.289E-05 | global batch size: 256 | lm loss: 3.380468E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.414 | TFLOPs: 48.98 | 7: iteration 2670/ 2891 | consumed samples: 683520 | consumed tokens: 1399848960 | elapsed time per iteration (s): 1.27 | learning rate: 2.265E-05 | global batch size: 256 | lm loss: 3.335890E+00 | grad norm: 0.268 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.903 | TFLOPs: 48.62 | 7: iteration 2680/ 2891 | consumed samples: 686080 | consumed tokens: 1405091840 | elapsed time per iteration (s): 1.27 | learning rate: 2.242E-05 | global batch size: 256 | lm loss: 3.317033E+00 | grad norm: 0.256 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.068 | TFLOPs: 48.90 | 7: iteration 2690/ 2891 | consumed samples: 688640 | consumed tokens: 1410334720 | elapsed time per iteration (s): 1.26 | learning rate: 2.220E-05 | global batch size: 256 | lm loss: 3.305687E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.442 | TFLOPs: 48.99 | 7: iteration 2700/ 2891 | consumed samples: 691200 | consumed tokens: 1415577600 | elapsed time per iteration (s): 1.26 | learning rate: 2.198E-05 | global batch size: 256 | lm loss: 3.325862E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.437 | TFLOPs: 48.99 | 7: iteration 2710/ 2891 | consumed samples: 693760 | consumed tokens: 1420820480 | elapsed time per iteration (s): 1.26 | learning rate: 2.178E-05 | global batch size: 256 | lm loss: 3.326694E+00 | grad norm: 0.258 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.475 | TFLOPs: 49.00 | 7: iteration 2720/ 2891 | consumed samples: 696320 | consumed tokens: 1426063360 | elapsed time per iteration (s): 1.26 | learning rate: 2.159E-05 | global batch size: 256 | lm loss: 3.313206E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.428 | TFLOPs: 
48.99 | 7: iteration 2730/ 2891 | consumed samples: 698880 | consumed tokens: 1431306240 | elapsed time per iteration (s): 1.26 | learning rate: 2.141E-05 | global batch size: 256 | lm loss: 3.327401E+00 | grad norm: 0.285 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.428 | TFLOPs: 48.99 | 7: iteration 2740/ 2891 | consumed samples: 701440 | consumed tokens: 1436549120 | elapsed time per iteration (s): 1.26 | learning rate: 2.124E-05 | global batch size: 256 | lm loss: 3.306187E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.445 | TFLOPs: 48.99 | 7: iteration 2750/ 2891 | consumed samples: 704000 | consumed tokens: 1441792000 | elapsed time per iteration (s): 1.27 | learning rate: 2.109E-05 | global batch size: 256 | lm loss: 3.313600E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.227 | TFLOPs: 48.94 | 7: iteration 2760/ 2891 | consumed samples: 706560 | consumed tokens: 1447034880 | elapsed time per iteration (s): 1.26 | learning rate: 2.094E-05 | global batch size: 256 | lm loss: 3.341908E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.436 | TFLOPs: 48.99 | 7: iteration 2770/ 2891 | consumed samples: 709120 | consumed tokens: 1452277760 | elapsed time per iteration (s): 1.27 | learning rate: 2.080E-05 | global batch size: 256 | lm loss: 3.312650E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.126 | TFLOPs: 48.91 | 7: iteration 2780/ 2891 | consumed samples: 711680 | consumed tokens: 1457520640 | elapsed time per iteration (s): 1.27 | learning rate: 2.068E-05 | global batch size: 256 | lm loss: 3.307022E+00 | grad norm: 0.268 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.348 | TFLOPs: 48.97 | 7: iteration 2790/ 2891 | consumed samples: 714240 | consumed tokens: 1462763520 | elapsed time per iteration (s): 1.27 | learning rate: 2.056E-05 | global batch size: 256 | lm loss: 3.332689E+00 | grad norm: 0.274 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.304 | TFLOPs: 48.96 | 7: iteration 2800/ 2891 | consumed samples: 716800 | consumed tokens: 1468006400 | elapsed time per iteration (s): 1.27 | learning rate: 2.046E-05 | global batch size: 256 | lm loss: 3.286166E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.351 | TFLOPs: 48.97 | 7: iteration 2810/ 2891 | consumed samples: 719360 | consumed tokens: 1473249280 | elapsed time per iteration (s): 1.27 | learning rate: 2.036E-05 | global batch size: 256 | lm loss: 3.310330E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.255 | TFLOPs: 48.70 | 7: iteration 2820/ 2891 | consumed samples: 721920 | consumed tokens: 1478492160 | elapsed time per iteration (s): 1.27 | learning rate: 2.028E-05 | global batch size: 256 | lm loss: 3.319715E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.249 | TFLOPs: 48.94 | 7: iteration 2830/ 2891 | consumed samples: 724480 | consumed tokens: 1483735040 | elapsed time per 
iteration (s): 1.27 | learning rate: 2.021E-05 | global batch size: 256 | lm loss: 3.300797E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.346 | TFLOPs: 48.97 | 7: iteration 2840/ 2891 | consumed samples: 727040 | consumed tokens: 1488977920 | elapsed time per iteration (s): 1.27 | learning rate: 2.014E-05 | global batch size: 256 | lm loss: 3.316748E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.357 | TFLOPs: 48.97 | 7: iteration 2850/ 2891 | consumed samples: 729600 | consumed tokens: 1494220800 | elapsed time per iteration (s): 1.27 | learning rate: 2.009E-05 | global batch size: 256 | lm loss: 3.333828E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.007 | TFLOPs: 48.88 | 7: iteration 2860/ 2891 | consumed samples: 732160 | consumed tokens: 1499463680 | elapsed time per iteration (s): 1.26 | learning rate: 2.005E-05 | global batch size: 256 | lm loss: 3.319077E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.382 | TFLOPs: 48.97 | 7: iteration 2870/ 2891 | consumed samples: 734720 | consumed tokens: 1504706560 | elapsed time per iteration (s): 1.29 | learning rate: 2.003E-05 | global batch size: 256 | lm loss: 3.328903E+00 | grad norm: 0.274 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.256 | TFLOPs: 47.98 | 7: iteration 2880/ 2891 | consumed samples: 737280 | consumed tokens: 1509949440 | elapsed time per iteration (s): 1.27 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 3.298471E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.879 | TFLOPs: 48.85 | 7: iteration 2890/ 2891 | consumed samples: 739840 | consumed tokens: 1515192320 | elapsed time per iteration (s): 1.27 | learning rate: 2.000E-05 | global batch size: 256 | lm loss: 3.297031E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.822 | TFLOPs: 48.60 | 0: [after training is done] datetime: 2022-11-29 18:54:13 0: saving checkpoint at iteration 2891 to checkpoints_1b1oscar 7: ------------------------------------------------------------------------------------------------------------ 7: valid loss at the end of training for val data | lm loss value: 3.137404E+00 | lm loss PPL: 2.304397E+01 | 7: ------------------------------------------------------------------------------------------------------------ 0: [2022-11-29 18:54:13,711] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step2891 is begin to save! 0: [2022-11-29 18:54:13,714] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_01-model_00-model_states.pt... 0: [2022-11-29 18:54:13,915] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_01-model_00-model_states.pt. 0: [2022-11-29 18:54:13,916] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_03-model_00-model_states.pt... 0: [2022-11-29 18:54:13,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_03-model_00-model_states.pt. 
0: [2022-11-29 18:54:13,995] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_04-model_00-model_states.pt... 0: [2022-11-29 18:54:14,069] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_04-model_00-model_states.pt. 0: [2022-11-29 18:54:14,069] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_05-model_00-model_states.pt... 0: [2022-11-29 18:54:14,143] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_05-model_00-model_states.pt. 0: [2022-11-29 18:54:14,143] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_06-model_00-model_states.pt... 0: [2022-11-29 18:54:14,218] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_06-model_00-model_states.pt. 0: [2022-11-29 18:54:14,218] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_07-model_00-model_states.pt... 0: [2022-11-29 18:54:14,293] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_07-model_00-model_states.pt. 0: [2022-11-29 18:54:14,294] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_08-model_00-model_states.pt... 0: [2022-11-29 18:54:14,370] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_08-model_00-model_states.pt. 0: [2022-11-29 18:54:14,370] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_09-model_00-model_states.pt... 0: [2022-11-29 18:54:14,443] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_09-model_00-model_states.pt. 0: [2022-11-29 18:54:14,443] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_10-model_00-model_states.pt... 0: [2022-11-29 18:54:14,518] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_10-model_00-model_states.pt. 0: [2022-11-29 18:54:14,518] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_11-model_00-model_states.pt... 0: [2022-11-29 18:54:14,591] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_11-model_00-model_states.pt. 0: [2022-11-29 18:54:14,591] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_12-model_00-model_states.pt... 0: [2022-11-29 18:54:14,668] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_12-model_00-model_states.pt. 0: [2022-11-29 18:54:14,669] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_13-model_00-model_states.pt... 0: [2022-11-29 18:54:14,744] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_13-model_00-model_states.pt. 0: [2022-11-29 18:54:14,745] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_14-model_00-model_states.pt... 0: [2022-11-29 18:54:14,817] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_14-model_00-model_states.pt. 
0: [2022-11-29 18:54:14,818] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_15-model_00-model_states.pt... 0: [2022-11-29 18:54:14,895] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_15-model_00-model_states.pt. 0: [2022-11-29 18:54:14,896] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_16-model_00-model_states.pt... 0: [2022-11-29 18:54:14,968] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_16-model_00-model_states.pt. 0: [2022-11-29 18:54:14,969] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_17-model_00-model_states.pt... 0: [2022-11-29 18:54:15,045] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_17-model_00-model_states.pt. 0: [2022-11-29 18:54:15,046] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_18-model_00-model_states.pt... 0: [2022-11-29 18:54:15,122] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_18-model_00-model_states.pt. 0: [2022-11-29 18:54:15,123] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_19-model_00-model_states.pt... 0: [2022-11-29 18:54:15,197] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_19-model_00-model_states.pt. 0: [2022-11-29 18:54:15,197] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_20-model_00-model_states.pt... 0: [2022-11-29 18:54:15,274] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_20-model_00-model_states.pt. 0: [2022-11-29 18:54:15,274] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_21-model_00-model_states.pt... 0: [2022-11-29 18:54:15,346] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_21-model_00-model_states.pt. 0: [2022-11-29 18:54:15,346] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_22-model_00-model_states.pt... 0: [2022-11-29 18:54:15,422] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_22-model_00-model_states.pt. 0: [2022-11-29 18:54:15,423] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_23-model_00-model_states.pt... 0: [2022-11-29 18:54:15,494] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_23-model_00-model_states.pt. 0: [2022-11-29 18:54:15,495] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_24-model_00-model_states.pt... 0: [2022-11-29 18:54:15,570] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_24-model_00-model_states.pt. 0: [2022-11-29 18:54:15,570] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_25-model_00-model_states.pt... 0: [2022-11-29 18:54:15,643] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_25-model_00-model_states.pt. 
0: [2022-11-29 18:54:15,643] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_26-model_00-model_states.pt... 0: [2022-11-29 18:54:15,719] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_26-model_00-model_states.pt. 0: [2022-11-29 18:54:15,720] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_27-model_00-model_states.pt... 0: [2022-11-29 18:54:15,795] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_27-model_00-model_states.pt. 0: [2022-11-29 18:54:15,795] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_28-model_00-model_states.pt... 0: [2022-11-29 18:54:15,867] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_28-model_00-model_states.pt. 0: [2022-11-29 18:54:15,867] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/layer_30-model_00-model_states.pt... 0: [2022-11-29 18:54:15,870] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/layer_30-model_00-model_states.pt. 0: [2022-11-29 18:54:15,872] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_1b1oscar/global_step2891/mp_rank_00_model_states.pt 0: [2022-11-29 18:54:15,872] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/mp_rank_00_model_states.pt... 0: [2022-11-29 18:54:15,874] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/mp_rank_00_model_states.pt. 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 
2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 
5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 
4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 6: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 7: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 0: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 4: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 5: [2022-11-29 18:54:15,894] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 1: [2022-11-29 18:54:16,139] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,139] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,139] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-29 18:54:16,140] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,140] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,140] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 
6: [2022-11-29 18:54:16,142] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,142] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,142] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-29 18:54:16,144] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,145] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,145] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,145] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-29 18:54:16,145] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,145] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,145] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,145] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-29 18:54:16,145] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,150] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:54:16,151] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,151] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,151] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:54:16,151] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2022-11-29 18:54:16,151] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-29 18:54:16,151] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,151] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,151] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 
1: [2022-11-29 18:54:16,156] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,156] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,156] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-29 18:54:16,156] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,156] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,156] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-29 18:54:16,157] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:54:16,157] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 5: [2022-11-29 18:54:16,157] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,157] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:54:16,157] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,157] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-29 18:54:16,159] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,159] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2022-11-29 18:54:16,159] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,159] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,160] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,160] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2022-11-29 18:54:16,160] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-29 18:54:16,161] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:54:16,161] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 5: [2022-11-29 18:54:16,161] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 
0: [2022-11-29 18:54:16,161] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,162] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,162] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,162] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-29 18:54:16,163] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,163] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,163] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-29 18:54:16,163] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,163] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,163] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,164] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:54:16,164] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,164] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-29 18:54:16,165] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,165] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2022-11-29 18:54:16,166] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-29 18:54:16,166] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,166] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,166] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,167] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 
4: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:54:16,167] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 3: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,170] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:54:16,171] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,171] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-29 18:54:16,175] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,175] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,175] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,175] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,175] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-29 18:54:16,175] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-29 18:54:16,175] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 4: [2022-11-29 18:54:16,175] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,175] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-29 18:54:16,176] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2022-11-29 18:54:16,176] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2022-11-29 18:54:16,176] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-29 18:54:16,176] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 
0: [2022-11-29 18:54:16,176] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2022-11-29 18:54:16,176] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,167] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 2: [2022-11-29 18:54:16,159] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 3: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,159] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,165] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:54:16,165] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2022-11-29 18:54:16,165] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:54:16,167] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2022-11-29 18:54:16,167] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,169] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:54:16,169] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2022-11-29 18:54:16,169] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:54:16,169] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,169] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2022-11-29 18:54:16,169] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,170] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2022-11-29 18:54:16,170] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2022-11-29 18:54:16,170] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,179] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 
4: [2022-11-29 18:54:16,179] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:54:16,179] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,179] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-29 18:54:16,179] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2022-11-29 18:54:16,179] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-29 18:54:16,179] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,179] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,179] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,182] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,182] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2022-11-29 18:54:16,182] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 0: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,182] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2022-11-29 18:54:16,182] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 
7: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:54:16,183] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,183] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,183] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:54:16,183] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2022-11-29 18:54:16,183] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2022-11-29 18:54:16,183] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-29 18:54:16,185] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,185] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,185] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-29 18:54:16,186] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:54:16,192] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,192] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,192] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,192] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 
5: [2022-11-29 18:54:16,194] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:54:16,194] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2022-11-29 18:54:16,194] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-29 18:54:16,195] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:54:16,195] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 5: [2022-11-29 18:54:16,195] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-29 18:54:16,195] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2022-11-29 18:54:16,196] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2022-11-29 18:54:16,196] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-29 18:54:16,198] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,198] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,198] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-29 18:54:16,198] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2022-11-29 18:54:16,198] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2022-11-29 18:54:16,198] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,192] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2022-11-29 18:54:16,192] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,195] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 3: [2022-11-29 18:54:16,195] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 3: [2022-11-29 18:54:16,195] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-29 18:54:16,195] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 
3: [2022-11-29 18:54:16,195] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt
3: [2022-11-29 18:54:16,195] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now!
0: [2022-11-29 18:54:16,202] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
0: [2022-11-29 18:54:16,202] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now!
5: [2022-11-29 18:54:16,212] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt.
5: [2022-11-29 18:54:16,212] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt
5: [2022-11-29 18:54:16,212] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now!
5: [2022-11-29 18:54:16,212] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt.
2: [2022-11-29 18:54:16,186] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt.
5: [2022-11-29 18:54:16,213] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt
2: [2022-11-29 18:54:16,186] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt
2: [2022-11-29 18:54:16,186] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1oscar/global_step2891/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt
2: [2022-11-29 18:54:16,186] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now!
2: [2022-11-29 18:54:16,186] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now!
5: [2022-11-29 18:54:16,213] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now!
0: successfully saved checkpoint at iteration 2891 to checkpoints_1b1oscar
7: ------------------------------------------------------------------------------------------------------------
7: test loss at the end of training for test data | lm loss value: 3.239900E+00 | lm loss PPL: 2.553117E+01 |
7: ------------------------------------------------------------------------------------------------------------
END 2085640: Tue Nov 29 18:54:26 EET 2022
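
Editor's note: the records above show what one saved checkpoint directory contains for this run: per-layer model state files (layer_*-model_00-model_states.pt), a single mp_rank_00_model_states.pt file, and one bf16_zero_pp_rank_N_mp_rank_00_optim_states.pt optimizer shard per data-parallel rank (ranks 0 through 63 appear in the log). The Python sketch below is not part of the job output; it only illustrates, under those file-naming assumptions, how one might sanity-check that the global_step2891 directory is complete and peek inside a shard with torch.load.

import glob
import os

import torch

# Hypothetical local path; the job above writes under
# checkpoints_1b1oscar/global_step<iteration>.
ckpt_dir = "checkpoints_1b1oscar/global_step2891"

# Per-layer model weights plus the mp_rank_00 module state, as logged above.
layer_files = sorted(glob.glob(os.path.join(ckpt_dir, "layer_*-model_00-model_states.pt")))
assert os.path.isfile(os.path.join(ckpt_dir, "mp_rank_00_model_states.pt"))

# One bf16/ZeRO optimizer shard per data-parallel rank; the log shows ranks 0-63,
# so a complete save should have 64 of these files.
optim_shards = glob.glob(os.path.join(ckpt_dir, "bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt"))
print(f"{len(layer_files)} layer files, {len(optim_shards)} optimizer shards")
assert len(optim_shards) == 64

# Inspect one layer file; it is assumed here to be a torch-serialized dict.
if layer_files:
    state = torch.load(layer_files[0], map_location="cpu")
    for name, value in state.items():
        shape = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
        print(name, shape)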
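
Editor's note: the logged "lm loss PPL" is simply the exponential of the logged "lm loss value". A quick check (not part of the job output):

import math

lm_loss = 3.239900        # "lm loss value" reported above
print(math.exp(lm_loss))  # ~25.53, matching the reported "lm loss PPL" of 2.553117E+01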