GindaChen committed on
Commit cf7161e (verified)
1 Parent(s): e9c6e36

Upload folder using huggingface_hub

attnserver.run_attnserver.slurm.sh.343188.err.log CHANGED
@@ -75207,3 +75207,501 @@ W0621 20:29:13.908000 2640426 site-packages/torch/distributed/run.py:766] ******
75207
  warnings.warn(
75208
  /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75209
  warnings.warn(
75210
+ [rank3]:[W621 20:51:00.854953166 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75211
+ [rank2]:[W621 20:51:00.962299066 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75212
+ [rank5]:[W621 20:51:00.148150674 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75213
+ [rank4]:[W621 20:51:00.210070465 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75214
+ [rank6]:[W621 20:51:00.240571905 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75215
+ [rank1]:[W621 20:51:00.276397275 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75216
+ [rank7]:[W621 20:51:00.345032380 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75217
+ [rank60]:[W621 20:51:00.237124405 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75218
+ [rank62]:[W621 20:51:00.238716824 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75219
+ [rank33]:[W621 20:51:00.635726167 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75220
+ [rank61]:[W621 20:51:00.282239122 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75221
+ [rank38]:[W621 20:51:00.643733050 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75222
+ [rank23]:[W621 20:51:00.898463302 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75223
+ [rank18]:[W621 20:51:00.912562414 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75224
+ [rank46]:[W621 20:51:00.801105792 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75225
+ [rank9]:[W621 20:51:00.469320970 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75226
+ [rank21]:[W621 20:51:00.939993683 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75227
+ [rank26]:[W621 20:51:01.360346997 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75228
+ [rank14]:[W621 20:51:01.487399186 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75229
+ [rank42]:[W621 20:51:01.827250154 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75230
+ [rank58]:[W621 20:51:01.356083014 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75231
+ [rank17]:[W621 20:51:01.959418367 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75232
+ [rank50]:[W621 20:51:01.817275054 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75233
+ [rank22]:[W621 20:51:01.967142521 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75234
+ [rank54]:[W621 20:51:01.821237769 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75235
+ [rank15]:[W621 20:51:01.509369036 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75236
+ [rank57]:[W621 20:51:01.372247127 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75237
+ [rank12]:[W621 20:51:01.515477029 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75238
+ [rank31]:[W621 20:51:01.396093787 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75239
+ [rank25]:[W621 20:51:01.396322335 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75240
+ [rank27]:[W621 20:51:01.397459955 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75241
+ [rank49]:[W621 20:51:01.843552125 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75242
+ [rank41]:[W621 20:51:01.863965977 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75243
+ [rank29]:[W621 20:51:01.407167600 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75244
+ [rank19]:[W621 20:51:01.990829480 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75245
+ [rank30]:[W621 20:51:01.414607172 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75246
+ [rank55]:[W621 20:51:01.853249639 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75247
+ [rank63]:[W621 20:51:01.402243001 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75248
+ [rank47]:[W621 20:51:01.873861048 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75249
+ [rank13]:[W621 20:51:01.540720257 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75250
+ [rank52]:[W621 20:51:01.859351296 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75251
+ [rank45]:[W621 20:51:01.884741890 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75252
+ [rank43]:[W621 20:51:01.885426808 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75253
+ [rank44]:[W621 20:51:01.889842120 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75254
+ [rank37]:[W621 20:51:01.798350811 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75255
+ [rank59]:[W621 20:51:01.454390358 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75256
+ [rank20]:[W621 20:51:01.086150427 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75257
+ [rank53]:[W621 20:51:01.945499207 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75258
+ [rank28]:[W621 20:51:01.523242503 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75259
+ [rank36]:[W621 20:51:01.894238318 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75260
+ [rank11]:[W621 20:51:01.682480437 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75261
+ [rank10]:[W621 20:51:01.684795079 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75262
+ [rank35]:[W621 20:51:01.909892809 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75263
+ [rank34]:[W621 20:51:01.932308519 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75264
+ [rank39]:[W621 20:51:01.938369757 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75265
+ [rank51]:[W621 20:51:01.188016719 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75266
+ [rank56]:[W621 20:51:02.542790667 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75267
+ [rank0]:[W621 20:51:02.310916391 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75268
+ [rank16]:[W621 20:51:02.422025442 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75269
+ [rank32]:[W621 20:51:02.320639160 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75270
+ [rank48]:[W621 20:51:02.508199408 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75271
+ [rank24]:[W621 20:51:03.423340750 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75272
+ [rank40]:[W621 20:51:03.961716718 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75273
+ [rank8]:[W621 20:51:04.258972183 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
75274
+ W0621 20:51:32.176000 4147360 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-627_4147360_0' has failed to send a keep-alive heartbeat to the rendezvous '343188' due to an error of type RendezvousTimeoutError.
75275
+ W0621 20:51:32.176000 3009854 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-584_3009854_0' has failed to send a keep-alive heartbeat to the rendezvous '343188' due to an error of type RendezvousTimeoutError.
75276
+ W0621 20:51:32.179000 2640426 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-679_2640426_0' has failed to send a keep-alive heartbeat to the rendezvous '343188' due to an error of type RendezvousTimeoutError.
75277
+ W0621 20:51:32.180000 2741687 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-685_2741687_0' has failed to send a keep-alive heartbeat to the rendezvous '343188' due to an error of type RendezvousTimeoutError.
75278
+ W0621 20:51:32.179000 2823348 site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py:1341] The node 'fs-mbz-gpu-573_2823348_0' has failed to send a keep-alive heartbeat to the rendezvous '343188' due to an error of type RendezvousTimeoutError.
75279
+ + set +x
75280
+ + set +x
75281
+ + set +x
75282
+ + set +x
75283
+ + set +x
75284
+ + set +x
75285
+ + set +x
75286
+ + set +x
75287
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
75288
+ + export PROF_CTX_LENGTH=131072
75289
+ + PROF_CTX_LENGTH=131072
75290
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L131072*tp8.cp8.bs1.json'
75291
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L131072*tp8.cp8.bs1.json' ']'
75292
+ + echo 'Running ctx_length=131072, TP_SIZE=8, CP_SIZE=8, BATCH_SIZE=1'
75293
+ + srun bash ./attnserver.sh
75294
+ + which python3
75295
+ + which python3
75296
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 3 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75297
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 5 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75298
+ + which python3
75299
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 2 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75300
+ + which python3
75301
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 4 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75302
+ + which python3
75303
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 6 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75304
+ + which python3
75305
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 7 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75306
+ + which python3
75307
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 0 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75308
+ + which python3
75309
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 1 --rdzv_id 343188 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-518:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 131072 --max-position-embeddings 131072 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
75310
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75311
+ and will be removed in future. Use torchrun.
75312
+ Note that --use-env is set by default in torchrun.
75313
+ If your script expects `--local-rank` argument to be set, please
75314
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75315
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75316
+ further instructions
75317
+
75318
+ main()
75319
+ W0621 20:51:35.301000 2646978 site-packages/torch/distributed/run.py:766]
75320
+ W0621 20:51:35.301000 2646978 site-packages/torch/distributed/run.py:766] *****************************************
75321
+ W0621 20:51:35.301000 2646978 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75322
+ W0621 20:51:35.301000 2646978 site-packages/torch/distributed/run.py:766] *****************************************
75323
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75324
+ and will be removed in future. Use torchrun.
75325
+ Note that --use-env is set by default in torchrun.
75326
+ If your script expects `--local-rank` argument to be set, please
75327
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75328
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75329
+ further instructions
75330
+
75331
+ main()
75332
+ W0621 20:51:35.339000 3016023 site-packages/torch/distributed/run.py:766]
75333
+ W0621 20:51:35.339000 3016023 site-packages/torch/distributed/run.py:766] *****************************************
75334
+ W0621 20:51:35.339000 3016023 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75335
+ W0621 20:51:35.339000 3016023 site-packages/torch/distributed/run.py:766] *****************************************
75336
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75337
+ and will be removed in future. Use torchrun.
75338
+ Note that --use-env is set by default in torchrun.
75339
+ If your script expects `--local-rank` argument to be set, please
75340
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75341
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75342
+ further instructions
75343
+
75344
+ main()
75345
+ W0621 20:51:35.347000 4153513 site-packages/torch/distributed/run.py:766]
75346
+ W0621 20:51:35.347000 4153513 site-packages/torch/distributed/run.py:766] *****************************************
75347
+ W0621 20:51:35.347000 4153513 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75348
+ W0621 20:51:35.347000 4153513 site-packages/torch/distributed/run.py:766] *****************************************
75349
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75350
+ and will be removed in future. Use torchrun.
75351
+ Note that --use-env is set by default in torchrun.
75352
+ If your script expects `--local-rank` argument to be set, please
75353
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75354
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75355
+ further instructions
75356
+
75357
+ main()
75358
+ W0621 20:51:35.427000 2747591 site-packages/torch/distributed/run.py:766]
75359
+ W0621 20:51:35.427000 2747591 site-packages/torch/distributed/run.py:766] *****************************************
75360
+ W0621 20:51:35.427000 2747591 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75361
+ W0621 20:51:35.427000 2747591 site-packages/torch/distributed/run.py:766] *****************************************
75362
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75363
+ and will be removed in future. Use torchrun.
75364
+ Note that --use-env is set by default in torchrun.
75365
+ If your script expects `--local-rank` argument to be set, please
75366
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75367
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75368
+ further instructions
75369
+
75370
+ main()
75371
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75372
+ and will be removed in future. Use torchrun.
75373
+ Note that --use-env is set by default in torchrun.
75374
+ If your script expects `--local-rank` argument to be set, please
75375
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75376
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75377
+ further instructions
75378
+
75379
+ main()
75380
+ W0621 20:51:35.430000 2829530 site-packages/torch/distributed/run.py:766]
75381
+ W0621 20:51:35.430000 2829530 site-packages/torch/distributed/run.py:766] *****************************************
75382
+ W0621 20:51:35.430000 2829530 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75383
+ W0621 20:51:35.430000 2829530 site-packages/torch/distributed/run.py:766] *****************************************
75384
+ W0621 20:51:35.431000 2532869 site-packages/torch/distributed/run.py:766]
75385
+ W0621 20:51:35.431000 2532869 site-packages/torch/distributed/run.py:766] *****************************************
75386
+ W0621 20:51:35.431000 2532869 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75387
+ W0621 20:51:35.431000 2532869 site-packages/torch/distributed/run.py:766] *****************************************
75388
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75389
+ and will be removed in future. Use torchrun.
75390
+ Note that --use-env is set by default in torchrun.
75391
+ If your script expects `--local-rank` argument to be set, please
75392
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75393
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75394
+ further instructions
75395
+
75396
+ main()
75397
+ W0621 20:51:35.741000 3477073 site-packages/torch/distributed/run.py:766]
75398
+ W0621 20:51:35.741000 3477073 site-packages/torch/distributed/run.py:766] *****************************************
75399
+ W0621 20:51:35.741000 3477073 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75400
+ W0621 20:51:35.741000 3477073 site-packages/torch/distributed/run.py:766] *****************************************
75401
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
75402
+ and will be removed in future. Use torchrun.
75403
+ Note that --use-env is set by default in torchrun.
75404
+ If your script expects `--local-rank` argument to be set, please
75405
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
75406
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
75407
+ further instructions
75408
+
75409
+ main()
75410
+ W0621 20:51:35.866000 2709588 site-packages/torch/distributed/run.py:766]
75411
+ W0621 20:51:35.866000 2709588 site-packages/torch/distributed/run.py:766] *****************************************
75412
+ W0621 20:51:35.866000 2709588 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
75413
+ W0621 20:51:35.866000 2709588 site-packages/torch/distributed/run.py:766] *****************************************
75414
+ [rank52]:[W621 20:51:59.983087728 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 52] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75415
+ [rank4]:[W621 20:51:59.022703445 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75416
+ [rank20]:[W621 20:51:59.130007385 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 20] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75417
+ [rank7]:[W621 20:51:59.024286389 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75418
+ [rank12]:[W621 20:51:59.671595349 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75419
+ [rank55]:[W621 20:51:59.985328575 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 55] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75420
+ [rank51]:[W621 20:51:59.985532883 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 51] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75421
+ [rank6]:[W621 20:51:59.025474261 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75422
+ [rank60]:[W621 20:51:59.536155277 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 60] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75423
+ [rank19]:[W621 20:51:59.133137568 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 19] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75424
+ [rank11]:[W621 20:51:59.673013247 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75425
+ [rank28]:[W621 20:51:59.549887382 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 28] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75426
+ [rank44]:[W621 20:51:59.007474896 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 44] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75427
+ [rank23]:[W621 20:51:59.133159110 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 23] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75428
+ [rank15]:[W621 20:51:59.673037290 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75429
+ [rank31]:[W621 20:51:59.550137481 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 31] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75430
+ [rank3]:[W621 20:51:59.026841203 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75431
+ [rank36]:[W621 20:51:59.891021323 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 36] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75432
+ [rank39]:[W621 20:51:59.891300334 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 39] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75433
+ [rank27]:[W621 20:51:59.550908931 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 27] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75434
+ [rank63]:[W621 20:51:59.537824447 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 63] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75435
+ [rank14]:[W621 20:51:59.674920127 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75436
+ [rank43]:[W621 20:51:59.008987817 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 43] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75437
+ [rank47]:[W621 20:51:59.008987962 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 47] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75438
+ [rank59]:[W621 20:51:59.537845925 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 59] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75439
+ [rank35]:[W621 20:51:59.892767018 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 35] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75440
+ [rank62]:[W621 20:51:59.537896744 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 62] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75441
+ [rank30]:[W621 20:51:59.552569948 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 30] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75442
+ [rank57]:[W621 20:51:59.538883395 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 57] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75443
+ [rank46]:[W621 20:51:59.010562257 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 46] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75444
+ [rank17]:[W621 20:51:59.136959245 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 17] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75445
+ [rank49]:[W621 20:51:59.991189707 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 49] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75446
+ [rank38]:[W621 20:51:59.894650090 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 38] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75447
+ [rank22]:[W621 20:51:59.137294574 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 22] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75448
+ [rank54]:[W621 20:51:59.991341003 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 54] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75449
+ [rank1]:[W621 20:51:59.031052509 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75450
+ [rank9]:[W621 20:51:59.678553960 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75451
+ [rank25]:[W621 20:51:59.556339023 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 25] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75452
+ [rank33]:[W621 20:51:59.897628448 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 33] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75453
+ [rank41]:[W621 20:51:59.017842036 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 41] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75454
+ [rank16]:[W621 20:51:59.339013061 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 16] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75455
+ [rank56]:[W621 20:51:59.744846326 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 56] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75456
+ [rank48]:[W621 20:51:59.195350293 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 48] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75457
+ [rank8]:[W621 20:51:59.885001921 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75458
+ [rank24]:[W621 20:51:59.761289668 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 24] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75459
+ [rank32]:[W621 20:51:59.106292173 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 32] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75460
+ [rank0]:[W621 20:51:59.274235798 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75461
+ [rank40]:[W621 20:51:59.351419603 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 40] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75462
+ [rank2]:[W621 20:51:59.403643110 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75463
+ [rank18]:[W621 20:51:59.511736725 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 18] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75464
+ [rank50]:[W621 20:51:59.365951379 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 50] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75465
+ [rank58]:[W621 20:51:59.916216107 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 58] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75466
+ [rank10]:[W621 20:51:59.052939357 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75467
+ [rank34]:[W621 20:51:59.270164314 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 34] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75468
+ [rank42]:[W621 20:51:59.386943105 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 42] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75469
+ [rank26]:[W621 20:51:59.930506225 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 26] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75470
+ [rank5]:[W621 20:51:59.419706903 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75471
+ [rank61]:[W621 20:51:59.930719524 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 61] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75472
+ [rank21]:[W621 20:51:59.528397152 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 21] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
75473
+ [rank13]:[W621 20:51:59.068656081 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
75474
+ [rank37]:[W621 20:51:59.286582431 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 37] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
75475
+ [rank29]:[W621 20:51:59.946702560 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 29] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
75476
+ [rank53]:[W621 20:51:59.381632220 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 53] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
75477
+ [rank45]:[W621 20:51:59.403842874 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 45] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can specify device_id in init_process_group() to force use of a particular device.
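The NCCL warning repeated above is advisory: each rank created its process group before telling NCCL which CUDA device it owns. Below is a minimal, illustrative sketch of the binding the warning suggests; it is not taken from pretrain_gpt_profile.py, and it assumes a torchrun/launch-style environment where LOCAL_RANK is exported and a PyTorch version recent enough to accept the device_id argument.

import os
import torch
import torch.distributed as dist

# Bind this worker to its local GPU before the process group is created,
# so NCCL knows the rank-to-GPU mapping up front.
local_rank = int(os.environ["LOCAL_RANK"])
device = torch.device(f"cuda:{local_rank}")
torch.cuda.set_device(device)

# Passing device_id makes the mapping explicit and avoids the warning above.
dist.init_process_group(backend="nccl", device_id=device)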
75478
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75479
+ warnings.warn(
75480
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75481
+ warnings.warn(
75482
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75483
+ warnings.warn(
75484
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75485
+ warnings.warn(
75486
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75487
+ warnings.warn(
75488
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75489
+ warnings.warn(
75490
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75491
+ warnings.warn(
75492
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75493
+ warnings.warn(
75494
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75495
+ warnings.warn(
75496
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75497
+ warnings.warn(
75498
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75499
+ warnings.warn(
75500
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75501
+ warnings.warn(
75502
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75503
+ warnings.warn(
75504
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75505
+ warnings.warn(
75506
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75507
+ warnings.warn(
75508
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75509
+ warnings.warn(
75510
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75511
+ warnings.warn(
75512
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75513
+ warnings.warn(
75514
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75515
+ warnings.warn(
75516
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75517
+ warnings.warn(
75518
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75519
+ warnings.warn(
75520
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75521
+ warnings.warn(
75522
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75523
+ warnings.warn(
75524
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75525
+ warnings.warn(
75526
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75527
+ warnings.warn(
75528
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75529
+ warnings.warn(
75530
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75531
+ warnings.warn(
75532
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75533
+ warnings.warn(
75534
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75535
+ warnings.warn(
75536
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75537
+ warnings.warn(
75538
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75539
+ warnings.warn(
75540
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75541
+ warnings.warn(
75542
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75543
+ warnings.warn(
75544
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75545
+ warnings.warn(
75546
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75547
+ warnings.warn(
75548
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75549
+ warnings.warn(
75550
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75551
+ warnings.warn(
75552
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75553
+ warnings.warn(
75554
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75555
+ warnings.warn(
75556
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75557
+ warnings.warn(
75558
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75559
+ warnings.warn(
75560
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75561
+ warnings.warn(
75562
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75563
+ warnings.warn(
75564
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75565
+ warnings.warn(
75566
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75567
+ warnings.warn(
75568
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75569
+ warnings.warn(
75570
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75571
+ warnings.warn(
75572
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75573
+ warnings.warn(
75574
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75575
+ warnings.warn(
75576
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75577
+ warnings.warn(
75578
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75579
+ warnings.warn(
75580
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75581
+ warnings.warn(
75582
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75583
+ warnings.warn(
75584
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75585
+ warnings.warn(
75586
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75587
+ warnings.warn(
75588
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75589
+ warnings.warn(
75590
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75591
+ warnings.warn(
75592
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75593
+ warnings.warn(
75594
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75595
+ warnings.warn(
75596
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75597
+ warnings.warn(
75598
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75599
+ warnings.warn(
75600
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75601
+ warnings.warn(
75602
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75603
+ warnings.warn(
75604
+ /mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
75605
+ warnings.warn(
75606
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75607
+ warnings.warn(
75608
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75609
+ warnings.warn(
75610
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75611
+ warnings.warn(
75612
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75613
+ warnings.warn(
75614
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75615
+ warnings.warn(
75616
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75617
+ warnings.warn(
75618
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75619
+ warnings.warn(
75620
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75621
+ warnings.warn(
75622
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75623
+ warnings.warn(
75624
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75625
+ warnings.warn(
75626
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75627
+ warnings.warn(
75628
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75629
+ warnings.warn(
75630
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75631
+ warnings.warn(
75632
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75633
+ warnings.warn(
75634
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75635
+ warnings.warn(
75636
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75637
+ warnings.warn(
75638
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75639
+ warnings.warn(
75640
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75641
+ warnings.warn(
75642
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75643
+ warnings.warn(
75644
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75645
+ warnings.warn(
75646
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75647
+ warnings.warn(
75648
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75649
+ warnings.warn(
75650
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75651
+ warnings.warn(
75652
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75653
+ warnings.warn(
75654
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75655
+ warnings.warn(
75656
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75657
+ warnings.warn(
75658
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75659
+ warnings.warn(
75660
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75661
+ warnings.warn(
75662
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75663
+ warnings.warn(
75664
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75665
+ warnings.warn(
75666
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75667
+ warnings.warn(
75668
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75669
+ warnings.warn(
75670
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75671
+ warnings.warn(
75672
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75673
+ warnings.warn(
75674
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75675
+ warnings.warn(
75676
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75677
+ warnings.warn(
75678
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75679
+ warnings.warn(
75680
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75681
+ warnings.warn(
75682
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75683
+ warnings.warn(
75684
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75685
+ warnings.warn(
75686
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75687
+ warnings.warn(
75688
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75689
+ warnings.warn(
75690
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75691
+ warnings.warn(
75692
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75693
+ warnings.warn(
75694
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75695
+ warnings.warn(
75696
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75697
+ warnings.warn(
75698
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75699
+ warnings.warn(
75700
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75701
+ warnings.warn(
75702
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75703
+ warnings.warn(
75704
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75705
+ warnings.warn(
75706
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
75707
+ warnings.warn(
attnserver.run_attnserver.slurm.sh.343188.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343190.err.log CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:06b6df0d520560b899bfde0546eaca2a6c6a4cbc65c8ce80951bcc9df2055b79
3
- size 26219172
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2388f4b1471e8ae64168a57c25acec4d59b3fd54e3986775d4919c73876db0ae
3
+ size 26744646
attnserver.run_attnserver.slurm.sh.343190.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343193.err.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343193.out.log CHANGED
The diff for this file is too large to render. See raw diff
 
attnserver.run_attnserver.slurm.sh.343194.err.log ADDED
@@ -0,0 +1,310 @@
1
+ + source /mnt/weka/home/hao.zhang/conda/miniconda/bin/activate
2
+ ++ _CONDA_ROOT=/mnt/weka/home/hao.zhang/conda/miniconda
3
+ ++ . /mnt/weka/home/hao.zhang/conda/miniconda/etc/profile.d/conda.sh
4
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
5
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
6
+ +++ export _CE_M=
7
+ +++ _CE_M=
8
+ +++ export _CE_CONDA=
9
+ +++ _CE_CONDA=
10
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
11
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
12
+ +++ '[' -z x ']'
13
+ ++ conda activate
14
+ ++ local cmd=activate
15
+ ++ case "$cmd" in
16
+ ++ __conda_activate activate
17
+ ++ '[' -n '' ']'
18
+ ++ local ask_conda
19
+ +++ PS1=
20
+ +++ __conda_exe shell.posix activate
21
+ +++ '[' -n '' ']'
22
+ +++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate
23
+ ++ ask_conda='unset _CE_M
24
+ unset _CE_CONDA
25
+ PS1='\''(base) '\''
26
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
27
+ export CONDA_SHLVL='\''1'\''
28
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
29
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
30
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
31
+ ++ eval 'unset _CE_M
32
+ unset _CE_CONDA
33
+ PS1='\''(base) '\''
34
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
35
+ export CONDA_SHLVL='\''1'\''
36
+ export CONDA_PROMPT_MODIFIER='\''(base) '\''
37
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
38
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
39
+ +++ unset _CE_M
40
+ +++ unset _CE_CONDA
41
+ +++ PS1='(base) '
42
+ +++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
43
+ +++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
44
+ +++ export CONDA_SHLVL=1
45
+ +++ CONDA_SHLVL=1
46
+ +++ export 'CONDA_PROMPT_MODIFIER=(base) '
47
+ +++ CONDA_PROMPT_MODIFIER='(base) '
48
+ +++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
49
+ +++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
50
+ +++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
51
+ +++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
52
+ ++ __conda_hashr
53
+ ++ '[' -n '' ']'
54
+ ++ '[' -n '' ']'
55
+ ++ hash -r
56
+ + conda activate junda-attnserver
57
+ + local cmd=activate
58
+ + case "$cmd" in
59
+ + __conda_activate activate junda-attnserver
60
+ + '[' -n '' ']'
61
+ + local ask_conda
62
+ ++ PS1='(base) '
63
+ ++ __conda_exe shell.posix activate junda-attnserver
64
+ ++ '[' -n '' ']'
65
+ ++ /mnt/weka/home/hao.zhang/conda/miniconda/bin/conda shell.posix activate junda-attnserver
66
+ + ask_conda='unset _CE_M
67
+ unset _CE_CONDA
68
+ PS1='\''(junda-attnserver) '\''
69
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
70
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
71
+ export CONDA_SHLVL='\''2'\''
72
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
73
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
74
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
75
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
76
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
77
+ + eval 'unset _CE_M
78
+ unset _CE_CONDA
79
+ PS1='\''(junda-attnserver) '\''
80
+ export PATH='\''/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'\''
81
+ export CONDA_PREFIX='\''/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver'\''
82
+ export CONDA_SHLVL='\''2'\''
83
+ export CONDA_DEFAULT_ENV='\''junda-attnserver'\''
84
+ export CONDA_PROMPT_MODIFIER='\''(junda-attnserver) '\''
85
+ export CONDA_PREFIX_1='\''/mnt/weka/home/hao.zhang/conda/miniconda'\''
86
+ export CONDA_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda'\''
87
+ export CONDA_PYTHON_EXE='\''/mnt/weka/home/hao.zhang/conda/miniconda/bin/python'\'''
88
+ ++ unset _CE_M
89
+ ++ unset _CE_CONDA
90
+ ++ PS1='(junda-attnserver) '
91
+ ++ export PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
92
+ ++ PATH=/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/.local/bin:/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin:/mnt/weka/home/hao.zhang/conda/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
93
+ ++ export CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
94
+ ++ CONDA_PREFIX=/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver
95
+ ++ export CONDA_SHLVL=2
96
+ ++ CONDA_SHLVL=2
97
+ ++ export CONDA_DEFAULT_ENV=junda-attnserver
98
+ ++ CONDA_DEFAULT_ENV=junda-attnserver
99
+ ++ export 'CONDA_PROMPT_MODIFIER=(junda-attnserver) '
100
+ ++ CONDA_PROMPT_MODIFIER='(junda-attnserver) '
101
+ ++ export CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
102
+ ++ CONDA_PREFIX_1=/mnt/weka/home/hao.zhang/conda/miniconda
103
+ ++ export CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
104
+ ++ CONDA_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/conda
105
+ ++ export CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
106
+ ++ CONDA_PYTHON_EXE=/mnt/weka/home/hao.zhang/conda/miniconda/bin/python
107
+ + __conda_hashr
108
+ + '[' -n '' ']'
109
+ + '[' -n '' ']'
110
+ + hash -r
111
+ + export CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
112
+ + CHROME_TRACE_PREFIX=/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
113
+ + mkdir -p /mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5
114
+ + export PROF_TP_SIZE=8
115
+ + PROF_TP_SIZE=8
116
+ + export PROF_CP_SIZE=8
117
+ + PROF_CP_SIZE=8
118
+ + export PROF_BS=32
119
+ + PROF_BS=32
120
+ + for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
121
+ + export PROF_CTX_LENGTH=1024
122
+ + PROF_CTX_LENGTH=1024
123
+ + name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp8.bs32.json'
124
+ + '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L1024*tp8.cp8.bs32.json' ']'
125
+ + echo 'Running ctx_length=1024, TP_SIZE=8, CP_SIZE=8, BATCH_SIZE=32'
126
+ + srun bash ./attnserver.sh
127
+ + which python3
128
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 0 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
129
+ + which python3
130
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 3 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
131
+ + which python3
132
+ + which python3
133
+ + which python3
134
+ + which python3
135
+ + which python3
136
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 2 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
137
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 1 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
138
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 4 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
139
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 5 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
140
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 7 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
141
+ + which python3
142
+ + python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 8 --node_rank 6 --rdzv_id 343194 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-020:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 8 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 1024 --max-position-embeddings 1024 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
143
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
144
+ and will be removed in future. Use torchrun.
145
+ Note that --use-env is set by default in torchrun.
146
+ If your script expects `--local-rank` argument to be set, please
147
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
148
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
149
+ further instructions
150
+
151
+ main()
152
+ W0621 20:51:40.649000 3442655 site-packages/torch/distributed/run.py:766]
153
+ W0621 20:51:40.649000 3442655 site-packages/torch/distributed/run.py:766] *****************************************
154
+ W0621 20:51:40.649000 3442655 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
155
+ W0621 20:51:40.649000 3442655 site-packages/torch/distributed/run.py:766] *****************************************
156
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
157
+ and will be removed in future. Use torchrun.
158
+ Note that --use-env is set by default in torchrun.
159
+ If your script expects `--local-rank` argument to be set, please
160
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
161
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
162
+ further instructions
163
+
164
+ main()
165
+ W0621 20:51:40.666000 1535303 site-packages/torch/distributed/run.py:766]
166
+ W0621 20:51:40.666000 1535303 site-packages/torch/distributed/run.py:766] *****************************************
167
+ W0621 20:51:40.666000 1535303 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
168
+ W0621 20:51:40.666000 1535303 site-packages/torch/distributed/run.py:766] *****************************************
169
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
170
+ and will be removed in future. Use torchrun.
171
+ Note that --use-env is set by default in torchrun.
172
+ If your script expects `--local-rank` argument to be set, please
173
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
174
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
175
+ further instructions
176
+
177
+ main()
178
+ W0621 20:51:40.681000 227295 site-packages/torch/distributed/run.py:766]
179
+ W0621 20:51:40.681000 227295 site-packages/torch/distributed/run.py:766] *****************************************
180
+ W0621 20:51:40.681000 227295 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
181
+ W0621 20:51:40.681000 227295 site-packages/torch/distributed/run.py:766] *****************************************
182
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
183
+ and will be removed in future. Use torchrun.
184
+ Note that --use-env is set by default in torchrun.
185
+ If your script expects `--local-rank` argument to be set, please
186
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
187
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
188
+ further instructions
189
+
190
+ main()
191
+ W0621 20:51:40.683000 3614183 site-packages/torch/distributed/run.py:766]
192
+ W0621 20:51:40.683000 3614183 site-packages/torch/distributed/run.py:766] *****************************************
193
+ W0621 20:51:40.683000 3614183 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
194
+ W0621 20:51:40.683000 3614183 site-packages/torch/distributed/run.py:766] *****************************************
195
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
196
+ and will be removed in future. Use torchrun.
197
+ Note that --use-env is set by default in torchrun.
198
+ If your script expects `--local-rank` argument to be set, please
199
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
200
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
201
+ further instructions
202
+
203
+ main()
204
+ W0621 20:51:40.696000 3431036 site-packages/torch/distributed/run.py:766]
205
+ W0621 20:51:40.696000 3431036 site-packages/torch/distributed/run.py:766] *****************************************
206
+ W0621 20:51:40.696000 3431036 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
207
+ W0621 20:51:40.696000 3431036 site-packages/torch/distributed/run.py:766] *****************************************
208
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
209
+ and will be removed in future. Use torchrun.
210
+ Note that --use-env is set by default in torchrun.
211
+ If your script expects `--local-rank` argument to be set, please
212
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
213
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
214
+ further instructions
215
+
216
+ main()
217
+ W0621 20:51:40.707000 3673599 site-packages/torch/distributed/run.py:766]
218
+ W0621 20:51:40.707000 3673599 site-packages/torch/distributed/run.py:766] *****************************************
219
+ W0621 20:51:40.707000 3673599 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
220
+ W0621 20:51:40.707000 3673599 site-packages/torch/distributed/run.py:766] *****************************************
221
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
222
+ and will be removed in future. Use torchrun.
223
+ Note that --use-env is set by default in torchrun.
224
+ If your script expects `--local-rank` argument to be set, please
225
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
226
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
227
+ further instructions
228
+
229
+ main()
230
+ W0621 20:51:40.729000 2039389 site-packages/torch/distributed/run.py:766]
231
+ W0621 20:51:40.729000 2039389 site-packages/torch/distributed/run.py:766] *****************************************
232
+ W0621 20:51:40.729000 2039389 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
233
+ W0621 20:51:40.729000 2039389 site-packages/torch/distributed/run.py:766] *****************************************
234
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
235
+ and will be removed in future. Use torchrun.
236
+ Note that --use-env is set by default in torchrun.
237
+ If your script expects `--local-rank` argument to be set, please
238
+ change it to read from `os.environ['LOCAL_RANK']` instead. See
239
+ https://pytorch.org/docs/stable/distributed.html#launch-utility for
240
+ further instructions
241
+
242
+ main()
243
+ W0621 20:51:40.779000 3429108 site-packages/torch/distributed/run.py:766]
244
+ W0621 20:51:40.779000 3429108 site-packages/torch/distributed/run.py:766] *****************************************
245
+ W0621 20:51:40.779000 3429108 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
246
+ W0621 20:51:40.779000 3429108 site-packages/torch/distributed/run.py:766] *****************************************
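The FutureWarning above is advisory: under torchrun the local rank is exported as an environment variable rather than passed as a --local-rank argument. A minimal sketch of the suggested change, assuming one worker process per GPU:

# Minimal sketch of what the FutureWarning suggests: torchrun exports LOCAL_RANK
# for every worker, so the script reads it from the environment instead of
# accepting a --local-rank command-line argument.
import os

import torch

local_rank = int(os.environ.get("LOCAL_RANK", "0"))
if torch.cuda.is_available():
    torch.cuda.set_device(local_rank)  # bind this worker to its local GPU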
247
+ [rank2]:[W621 20:52:05.491128679 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
248
+ [rank10]:[W621 20:52:05.256206507 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
249
+ [rank42]:[W621 20:52:05.017065162 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 42] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
250
+ [rank26]:[W621 20:52:05.123788759 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 26] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
251
+ [rank18]:[W621 20:52:05.602335467 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 18] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
252
+ [rank50]:[W621 20:52:05.989172577 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 50] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
253
+ [rank34]:[W621 20:52:05.274193241 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 34] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
254
+ [rank49]:[W621 20:52:05.008732855 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 49] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
255
+ [rank1]:[W621 20:52:05.514231475 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
256
+ [rank41]:[W621 20:52:05.038778551 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 41] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
257
+ [rank7]:[W621 20:52:05.515528465 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
258
+ [rank55]:[W621 20:52:05.011029103 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 55] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
259
+ [rank15]:[W621 20:52:05.279993585 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
260
+ [rank9]:[W621 20:52:05.280114792 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
261
+ [rank31]:[W621 20:52:05.147159032 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 31] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
262
+ [rank25]:[W621 20:52:05.147311825 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 25] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
263
+ [rank33]:[W621 20:52:05.296554129 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 33] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
264
+ [rank23]:[W621 20:52:05.625662716 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 23] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
265
+ [rank4]:[W621 20:52:05.518897385 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
266
+ [rank39]:[W621 20:52:05.299324297 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 39] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
267
+ [rank17]:[W621 20:52:05.628181180 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 17] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
268
+ [rank12]:[W621 20:52:05.283728685 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
269
+ [rank52]:[W621 20:52:05.015135622 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 52] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
270
+ [rank36]:[W621 20:52:05.300129424 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 36] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
271
+ [rank20]:[W621 20:52:05.628944369 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 20] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
272
+ [rank28]:[W621 20:52:05.151439244 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 28] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
273
+ [rank44]:[W621 20:52:05.048543084 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 44] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
274
+ [rank47]:[W621 20:52:05.052269099 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 47] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
275
+ [rank58]:[W621 20:52:05.066458667 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 58] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
276
+ [rank57]:[W621 20:52:05.066492524 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 57] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
277
+ [rank63]:[W621 20:52:05.066512977 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 63] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
278
+ [rank60]:[W621 20:52:05.066521303 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 60] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
279
+ [rank48]:[W621 20:52:05.439391216 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 48] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
280
+ [rank8]:[W621 20:52:05.711547080 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
281
+ [rank16]:[W621 20:52:05.064020326 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 16] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
282
+ [rank32]:[W621 20:52:05.747082484 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 32] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
283
+ [rank24]:[W621 20:52:05.616432079 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 24] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
284
+ [rank56]:[W621 20:52:05.522936245 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 56] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
285
+ [rank40]:[W621 20:52:05.543818157 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 40] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
286
+ [rank62]:[W621 20:52:05.551859068 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 62] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
287
+ [rank6]:[W621 20:52:05.020647217 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
288
+ [rank22]:[W621 20:52:05.129119111 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 22] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
289
+ [rank14]:[W621 20:52:05.784791767 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
290
+ [rank30]:[W621 20:52:05.651944956 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 30] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
291
+ [rank54]:[W621 20:52:05.517224175 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 54] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
292
+ [rank46]:[W621 20:52:05.546992867 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 46] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
293
+ [rank38]:[W621 20:52:05.803933589 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 38] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
294
+ [rank0]:[W621 20:52:05.039881672 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
295
+ [rank5]:[W621 20:52:05.041566100 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
296
+ [rank3]:[W621 20:52:05.043080646 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
297
+ [rank21]:[W621 20:52:05.151879517 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 21] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
298
+ [rank29]:[W621 20:52:05.674001978 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 29] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
299
+ [rank59]:[W621 20:52:05.575613619 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 59] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
300
+ [rank61]:[W621 20:52:05.575789092 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 61] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
301
+ [rank11]:[W621 20:52:05.807570929 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
302
+ [rank45]:[W621 20:52:05.568188971 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 45] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
303
+ [rank19]:[W621 20:52:05.152690599 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 19] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
304
+ [rank43]:[W621 20:52:05.568592607 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 43] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
305
+ [rank13]:[W621 20:52:05.807951361 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
306
+ [rank27]:[W621 20:52:05.675314448 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 27] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
307
+ [rank35]:[W621 20:52:05.826175837 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 35] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
308
+ [rank37]:[W621 20:52:05.826219975 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 37] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
309
+ [rank53]:[W621 20:52:05.543290982 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 53] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
310
+ [rank51]:[W621 20:52:05.543465839 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 51] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
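The ProcessGroupNCCL warnings above mean each rank initialized the process group before binding itself to a GPU, so NCCL had to guess the rank-to-GPU mapping. A minimal sketch of the fix the warning points at, assuming one process per GPU and a PyTorch version whose init_process_group accepts device_id:

# Minimal sketch (assumptions noted above): set the device first, then pass it
# explicitly so the rank-to-GPU mapping is unambiguous instead of inferred.
import os

import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])
device = torch.device("cuda", local_rank)
torch.cuda.set_device(device)
dist.init_process_group(backend="nccl", device_id=device)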
attnserver.run_attnserver.slurm.sh.343194.out.log ADDED
@@ -0,0 +1,656 @@
1
+ Running ctx_length=1024, TP_SIZE=8, CP_SIZE=8, BATCH_SIZE=32
2
+ Cleaning up checkpoint directory: gpt-checkpoint
3
+ --------------------------------
4
+ CTX_LENGTH: 1024
5
+ TP_SIZE: 8
6
+ CP_SIZE: 8
7
+ CHECKPOINT_PATH: gpt-checkpoint
8
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
9
+ --------------------------------
10
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
11
+ Cleaning up checkpoint directory: gpt-checkpoint
12
+ --------------------------------
13
+ CTX_LENGTH: 1024
14
+ TP_SIZE: 8
15
+ CP_SIZE: 8
16
+ CHECKPOINT_PATH: gpt-checkpoint
17
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
18
+ --------------------------------
19
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
20
+ Cleaning up checkpoint directory: gpt-checkpoint
21
+ Cleaning up checkpoint directory: gpt-checkpoint
22
+ Cleaning up checkpoint directory: gpt-checkpoint
23
+ Cleaning up checkpoint directory: gpt-checkpoint
24
+ --------------------------------
25
+ CTX_LENGTH: 1024
26
+ TP_SIZE: 8
27
+ CP_SIZE: 8
28
+ Cleaning up checkpoint directory: gpt-checkpoint
29
+ --------------------------------
30
+ CTX_LENGTH: 1024
31
+ TP_SIZE: 8
32
+ CP_SIZE: 8
33
+ CHECKPOINT_PATH: gpt-checkpoint
34
+ CHECKPOINT_PATH: gpt-checkpoint
35
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
36
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
37
+ --------------------------------
38
+ --------------------------------
39
+ --------------------------------
40
+ CTX_LENGTH: 1024
41
+ TP_SIZE: 8
42
+ CP_SIZE: 8
43
+ --------------------------------
44
+ CTX_LENGTH: 1024
45
+ TP_SIZE: 8
46
+ CP_SIZE: 8
47
+ CHECKPOINT_PATH: gpt-checkpoint
48
+ --------------------------------
49
+ CTX_LENGTH: 1024
50
+ TP_SIZE: 8
51
+ CP_SIZE: 8
52
+ CHECKPOINT_PATH: gpt-checkpoint
53
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
54
+ Cleaning up checkpoint directory: gpt-checkpoint
55
+ CHECKPOINT_PATH: gpt-checkpoint
56
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
57
+ --------------------------------
58
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
59
+ --------------------------------
60
+ --------------------------------
61
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
62
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
63
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
64
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
65
+ --------------------------------
66
+ CTX_LENGTH: 1024
67
+ TP_SIZE: 8
68
+ CP_SIZE: 8
69
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
70
+ CHECKPOINT_PATH: gpt-checkpoint
71
+ PWD: /mnt/weka/home/hao.zhang/junda/attnserver-megatron
72
+ --------------------------------
73
+ /mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/bin/python3
74
+ INFO:megatron.training.initialize:Setting logging level to 0
75
+ INFO:megatron.training.initialize:Setting logging level to 0
76
+ INFO:megatron.training.initialize:Setting logging level to 0
77
+ INFO:megatron.training.initialize:Setting logging level to 0
78
+ INFO:megatron.training.initialize:Setting logging level to 0
79
+ INFO:megatron.training.initialize:Setting logging level to 0
80
+ INFO:megatron.training.initialize:Setting logging level to 0
81
+ INFO:megatron.training.initialize:Setting logging level to 0
82
+ INFO:megatron.training.initialize:Setting logging level to 0
83
+ using world size: 64, data-parallel size: 1, context-parallel size: 8, hierarchical context-parallel sizes: None, tensor-model-parallel size: 8, encoder-tensor-model-parallel size: 0, pipeline-model-parallel size: 1, encoder-pipeline-model-parallel size: 0
84
+ Number of virtual stages per pipeline stage: None
85
+ WARNING: Setting args.check_for_nan_in_loss_and_grad to False since dynamic loss scaling is being used
86
+ using torch.float16 for parameters ...
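The warning above disables the nan-in-grad check because fp16 training relies on dynamic loss scaling, which already skips overflowed steps. A simplified sketch of that mechanism, using the initial_loss_scale, loss_scale_window and min_loss_scale values from the argument dump below (Megatron's actual implementation also applies hysteresis):

# Simplified sketch of dynamic fp16 loss scaling: back off after an overflow,
# grow again after a long enough run of clean steps.
class DynamicLossScaler:
    def __init__(self, initial_scale=4294967296.0, window=1000, min_scale=1.0):
        self.scale = initial_scale
        self.window = window
        self.min_scale = min_scale
        self.clean_steps = 0

    def update(self, found_inf: bool) -> None:
        if found_inf:
            self.scale = max(self.scale / 2.0, self.min_scale)
            self.clean_steps = 0  # the overflowed step is skipped entirely
        else:
            self.clean_steps += 1
            if self.clean_steps >= self.window:
                self.scale *= 2.0
                self.clean_steps = 0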
87
+ ------------------------ arguments ------------------------
88
+ account_for_embedding_in_pipeline_split ......... False
89
+ account_for_loss_in_pipeline_split .............. False
90
+ accumulate_allreduce_grads_in_fp32 .............. False
91
+ adam_beta1 ...................................... 0.9
92
+ adam_beta2 ...................................... 0.999
93
+ adam_eps ........................................ 1e-08
94
+ add_bias_linear ................................. True
95
+ add_position_embedding .......................... True
96
+ add_qkv_bias .................................... True
97
+ adlr_autoresume ................................. False
98
+ adlr_autoresume_interval ........................ 1000
99
+ align_grad_reduce ............................... True
100
+ align_param_gather .............................. False
101
+ app_tag_run_name ................................ None
102
+ app_tag_run_version ............................. 0.0.0
103
+ apply_layernorm_1p .............................. False
104
+ apply_query_key_layer_scaling ................... False
105
+ apply_residual_connection_post_layernorm ........ False
106
+ apply_rope_fusion ............................... False
107
+ async_save ...................................... None
108
+ async_tensor_model_parallel_allreduce ........... True
109
+ attention_backend ............................... AttnBackend.auto
110
+ attention_dropout ............................... 0.1
111
+ attention_softmax_in_fp32 ....................... False
112
+ auto_detect_ckpt_format ......................... False
113
+ barrier_with_L1_time ............................ True
114
+ bert_binary_head ................................ True
115
+ bert_embedder_type .............................. megatron
116
+ bert_load ....................................... None
117
+ bf16 ............................................ False
118
+ bias_dropout_fusion ............................. True
119
+ bias_gelu_fusion ................................ True
120
+ bias_swiglu_fusion .............................. True
121
+ biencoder_projection_dim ........................ 0
122
+ biencoder_shared_query_context_model ............ False
123
+ block_data_path ................................. None
124
+ calc_ft_timeouts ................................ False
125
+ calculate_per_token_loss ........................ False
126
+ check_for_large_grads ........................... False
127
+ check_for_nan_in_loss_and_grad .................. False
128
+ check_for_spiky_loss ............................ False
129
+ check_weight_hash_across_dp_replicas_interval ... None
130
+ ckpt_assume_constant_structure .................. False
131
+ ckpt_convert_format ............................. None
132
+ ckpt_convert_save ............................... None
133
+ ckpt_convert_update_legacy_dist_opt_format ...... False
134
+ ckpt_format ..................................... torch_dist
135
+ ckpt_fully_parallel_load ........................ False
136
+ ckpt_fully_parallel_save ........................ True
137
+ ckpt_fully_parallel_save_deprecated ............. False
138
+ ckpt_step ....................................... None
139
+ classes_fraction ................................ 1.0
140
+ clip_grad ....................................... 1.0
141
+ clone_scatter_output_in_embedding ............... True
142
+ config_logger_dir ...............................
143
+ consumed_train_samples .......................... 0
144
+ consumed_valid_samples .......................... 0
145
+ context_parallel_size ........................... 8
146
+ cp_comm_type .................................... ['p2p']
147
+ create_attention_mask_in_dataloader ............. True
148
+ cross_entropy_fusion_impl ....................... native
149
+ cross_entropy_loss_fusion ....................... False
150
+ cuda_graph_scope ................................ full
151
+ cuda_graph_warmup_steps ......................... 3
152
+ data_args_path .................................. None
153
+ data_cache_path ................................. None
154
+ data_parallel_random_init ....................... False
155
+ data_parallel_sharding_strategy ................. no_shard
156
+ data_parallel_size .............................. 1
157
+ data_path ....................................... None
158
+ data_per_class_fraction ......................... 1.0
159
+ data_sharding ................................... True
160
+ dataloader_type ................................. single
161
+ ddp_average_in_collective ....................... False
162
+ ddp_bucket_size ................................. None
163
+ ddp_num_buckets ................................. None
164
+ ddp_pad_buckets_for_high_nccl_busbw ............. False
165
+ decoder_first_pipeline_num_layers ............... None
166
+ decoder_last_pipeline_num_layers ................ None
167
+ decoder_num_layers .............................. None
168
+ decoder_seq_length .............................. None
169
+ decoupled_lr .................................... None
170
+ decoupled_min_lr ................................ None
171
+ decrease_batch_size_if_needed ................... False
172
+ defer_embedding_wgrad_compute ................... False
173
+ deprecated_use_mcore_models ..................... False
174
+ deterministic_mode .............................. False
175
+ dino_bottleneck_size ............................ 256
176
+ dino_freeze_last_layer .......................... 1
177
+ dino_head_hidden_size ........................... 2048
178
+ dino_local_crops_number ......................... 10
179
+ dino_local_img_size ............................. 96
180
+ dino_norm_last_layer ............................ False
181
+ dino_teacher_temp ............................... 0.07
182
+ dino_warmup_teacher_temp ........................ 0.04
183
+ dino_warmup_teacher_temp_epochs ................. 30
184
+ disable_bf16_reduced_precision_matmul ........... False
185
+ disable_mamba_mem_eff_path ...................... False
186
+ disable_straggler_on_startup .................... False
187
+ dist_ckpt_format_deprecated ..................... None
188
+ dist_ckpt_strictness ............................ assume_ok_unexpected
189
+ distribute_saved_activations .................... False
190
+ distributed_backend ............................. nccl
191
+ distributed_timeout_minutes ..................... 10
192
+ embedding_path .................................. None
193
+ empty_unused_memory_level ....................... 0
194
+ enable_cuda_graph ............................... False
195
+ enable_ft_package ............................... False
196
+ enable_gloo_process_groups ...................... True
197
+ enable_msc ...................................... True
198
+ enable_one_logger ............................... True
199
+ encoder_num_layers .............................. 2
200
+ encoder_pipeline_model_parallel_size ............ 0
201
+ encoder_seq_length .............................. 1024
202
+ encoder_tensor_model_parallel_size .............. 0
203
+ end_weight_decay ................................ 0.1
204
+ eod_mask_loss ................................... False
205
+ error_injection_rate ............................ 0
206
+ error_injection_type ............................ transient_error
207
+ eval_interval ................................... 16
208
+ eval_iters ...................................... 1
209
+ evidence_data_path .............................. None
210
+ exit_duration_in_mins ........................... None
211
+ exit_interval ................................... None
212
+ exit_on_missing_checkpoint ...................... False
213
+ exit_signal_handler ............................. False
214
+ exp_avg_dtype ................................... torch.float32
215
+ exp_avg_sq_dtype ................................ torch.float32
216
+ expert_model_parallel_size ...................... 1
217
+ expert_tensor_parallel_size ..................... 8
218
+ external_cuda_graph ............................. False
219
+ ffn_hidden_size ................................. 16384
220
+ finetune ........................................ False
221
+ first_last_layers_bf16 .......................... False
222
+ flash_decode .................................... False
223
+ fp16 ............................................ True
224
+ fp16_lm_cross_entropy ........................... False
225
+ fp32_residual_connection ........................ False
226
+ fp8 ............................................. None
227
+ fp8_amax_compute_algo ........................... most_recent
228
+ fp8_amax_history_len ............................ 1
229
+ fp8_interval .................................... 1
230
+ fp8_margin ...................................... 0
231
+ fp8_param_gather ................................ False
232
+ fp8_recipe ...................................... delayed
233
+ fp8_wgrad ....................................... True
234
+ fsdp_double_buffer .............................. False
235
+ global_batch_size ............................... 1
236
+ grad_reduce_in_bf16 ............................. False
237
+ gradient_accumulation_fusion .................... True
238
+ gradient_reduce_div_fusion ...................... True
239
+ group_query_attention ........................... True
240
+ head_lr_mult .................................... 1.0
241
+ heterogeneous_layers_config_encoded_json ........ None
242
+ heterogeneous_layers_config_path ................ None
243
+ hidden_dropout .................................. 0.1
244
+ hidden_size ..................................... 4096
245
+ hierarchical_context_parallel_sizes ............. None
246
+ high_priority_stream_groups ..................... []
247
+ hybrid_attention_ratio .......................... 0.0
248
+ hybrid_mlp_ratio ................................ 0.0
249
+ hybrid_override_pattern ......................... None
250
+ hysteresis ...................................... 2
251
+ ict_head_size ................................... None
252
+ ict_load ........................................ None
253
+ img_h ........................................... 224
254
+ img_w ........................................... 224
255
+ indexer_batch_size .............................. 128
256
+ indexer_log_interval ............................ 1000
257
+ inference_batch_times_seqlen_threshold .......... -1
258
+ inference_dynamic_batching ...................... False
259
+ inference_dynamic_batching_buffer_guaranteed_fraction 0.2
260
+ inference_dynamic_batching_buffer_overflow_factor None
261
+ inference_dynamic_batching_buffer_size_gb ....... 40.0
262
+ inference_dynamic_batching_chunk_size ........... 256
263
+ inference_dynamic_batching_max_requests_override None
264
+ inference_dynamic_batching_max_tokens_override .. None
265
+ inference_max_batch_size ........................ 8
266
+ inference_max_seq_length ........................ 2560
267
+ inference_rng_tracker ........................... False
268
+ init_method_std ................................. 0.02
269
+ init_method_xavier_uniform ...................... False
270
+ init_model_with_meta_device ..................... False
271
+ initial_loss_scale .............................. 4294967296
272
+ inprocess_active_world_size ..................... 64
273
+ inprocess_barrier_timeout ....................... 120
274
+ inprocess_completion_timeout .................... 120
275
+ inprocess_empty_cuda_cache ...................... False
276
+ inprocess_granularity ........................... node
277
+ inprocess_hard_timeout .......................... 90
278
+ inprocess_heartbeat_interval .................... 30
279
+ inprocess_heartbeat_timeout ..................... 60
280
+ inprocess_last_call_wait ........................ 1
281
+ inprocess_max_iterations ........................ None
282
+ inprocess_monitor_process_interval .............. 1.0
283
+ inprocess_monitor_thread_interval ............... 1.0
284
+ inprocess_progress_watchdog_interval ............ 1.0
285
+ inprocess_restart ............................... False
286
+ inprocess_soft_timeout .......................... 60
287
+ inprocess_termination_grace_time ................ 1
288
+ is_hybrid_model ................................. False
289
+ iter_per_epoch .................................. 1250
290
+ iterations_to_skip .............................. []
291
+ keep_fp8_transpose_cache_when_using_custom_fsdp . False
292
+ kv_channels ..................................... 64
293
+ kv_lora_rank .................................... 32
294
+ lazy_mpu_init ................................... None
295
+ load ............................................ gpt-checkpoint
296
+ load_model_opt_format ........................... False
297
+ local_rank ...................................... 0
298
+ log_interval .................................... 1
299
+ log_loss_scale_to_tensorboard ................... True
300
+ log_memory_to_tensorboard ....................... False
301
+ log_num_zeros_in_grad ........................... False
302
+ log_params_norm ................................. False
303
+ log_progress .................................... False
304
+ log_straggler ................................... False
305
+ log_throughput .................................. False
306
+ log_timers_to_tensorboard ....................... False
307
+ log_validation_ppl_to_tensorboard ............... False
308
+ log_world_size_to_tensorboard ................... False
309
+ logging_level ................................... 0
310
+ loss_scale ...................................... None
311
+ loss_scale_window ............................... 1000
312
+ lr .............................................. 0.0005
313
+ lr_decay_iters .................................. 150000
314
+ lr_decay_samples ................................ None
315
+ lr_decay_style .................................. cosine
316
+ lr_warmup_fraction .............................. None
317
+ lr_warmup_init .................................. 0.0
318
+ lr_warmup_iters ................................. 2
319
+ lr_warmup_samples ............................... 0
320
+ lr_wsd_decay_iters .............................. None
321
+ lr_wsd_decay_samples ............................ None
322
+ lr_wsd_decay_style .............................. exponential
323
+ main_grads_dtype ................................ torch.float32
324
+ main_params_dtype ............................... torch.float32
325
+ make_vocab_size_divisible_by .................... 128
326
+ mamba_head_dim .................................. 64
327
+ mamba_num_groups ................................ 8
328
+ mamba_num_heads ................................. None
329
+ mamba_state_dim ................................. 128
330
+ manual_gc ....................................... False
331
+ manual_gc_eval .................................. True
332
+ manual_gc_interval .............................. 0
333
+ mask_factor ..................................... 1.0
334
+ mask_prob ....................................... 0.15
335
+ mask_type ....................................... random
336
+ masked_softmax_fusion ........................... True
337
+ max_position_embeddings ......................... 1024
338
+ max_tokens_to_oom ............................... 12000
339
+ memory_snapshot_path ............................ snapshot.pickle
340
+ merge_file ...................................... merges.txt
341
+ micro_batch_size ................................ 1
342
+ microbatch_group_size_per_vp_stage .............. None
343
+ mid_level_dataset_surplus ....................... 0.005
344
+ min_loss_scale .................................. 1.0
345
+ min_lr .......................................... 0.0
346
+ mlp_chunks_for_prefill .......................... 1
347
+ mmap_bin_files .................................. True
348
+ mock_data ....................................... True
349
+ moe_apply_probs_on_input ........................ False
350
+ moe_aux_loss_coeff .............................. 0.0
351
+ moe_enable_deepep ............................... False
352
+ moe_expert_capacity_factor ...................... None
353
+ moe_extended_tp ................................. False
354
+ moe_ffn_hidden_size ............................. None
355
+ moe_grouped_gemm ................................ False
356
+ moe_input_jitter_eps ............................ None
357
+ moe_layer_freq .................................. 1
358
+ moe_layer_recompute ............................. False
359
+ moe_pad_expert_input_to_capacity ................ False
360
+ moe_per_layer_logging ........................... False
361
+ moe_permute_fusion .............................. False
362
+ moe_router_bias_update_rate ..................... 0.001
363
+ moe_router_dtype ................................ None
364
+ moe_router_enable_expert_bias ................... False
365
+ moe_router_force_load_balancing ................. False
366
+ moe_router_group_topk ........................... None
367
+ moe_router_load_balancing_type .................. aux_loss
368
+ moe_router_num_groups ........................... None
369
+ moe_router_padding_for_fp8 ...................... False
370
+ moe_router_pre_softmax .......................... False
371
+ moe_router_score_function ....................... softmax
372
+ moe_router_topk ................................. 2
373
+ moe_router_topk_scaling_factor .................. None
374
+ moe_shared_expert_intermediate_size ............. None
375
+ moe_shared_expert_overlap ....................... False
376
+ moe_token_dispatcher_type ....................... allgather
377
+ moe_token_drop_policy ........................... probs
378
+ moe_use_legacy_grouped_gemm ..................... False
379
+ moe_use_upcycling ............................... False
380
+ moe_z_loss_coeff ................................ None
381
+ mrope_section ................................... None
382
+ mscale .......................................... 1.0
383
+ mscale_all_dim .................................. 1.0
384
+ mtp_loss_scaling_factor ......................... 0.1
385
+ mtp_num_layers .................................. None
386
+ multi_latent_attention .......................... False
387
+ nccl_all_reduce_for_prefill ..................... False
388
+ nccl_communicator_config_path ................... None
389
+ nccl_ub ......................................... False
390
+ no_load_optim ................................... None
391
+ no_load_rng ..................................... None
392
+ no_persist_layer_norm ........................... False
393
+ no_rope_freq .................................... None
394
+ no_save_optim ................................... None
395
+ no_save_rng ..................................... None
396
+ non_persistent_ckpt_type ........................ None
397
+ non_persistent_global_ckpt_dir .................. None
398
+ non_persistent_local_ckpt_algo .................. fully_parallel
399
+ non_persistent_local_ckpt_dir ................... None
400
+ non_persistent_save_interval .................... None
401
+ norm_epsilon .................................... 1e-05
402
+ normalization ................................... LayerNorm
403
+ num_attention_heads ............................. 64
404
+ num_channels .................................... 3
405
+ num_classes ..................................... 1000
406
+ num_dataset_builder_threads ..................... 1
407
+ num_distributed_optimizer_instances ............. 1
408
+ num_experts ..................................... None
409
+ num_layers ...................................... 2
410
+ num_layers_at_end_in_bf16 ....................... 1
411
+ num_layers_at_start_in_bf16 ..................... 1
412
+ num_layers_per_virtual_pipeline_stage ........... None
413
+ num_query_groups ................................ 16
414
+ num_virtual_stages_per_pipeline_rank ............ None
415
+ num_workers ..................................... 2
416
+ object_storage_cache_path ....................... None
417
+ one_logger_async ................................ False
418
+ one_logger_project .............................. megatron-lm
419
+ one_logger_run_name ............................. None
420
+ onnx_safe ....................................... None
421
+ openai_gelu ..................................... False
422
+ optimizer ....................................... adam
423
+ optimizer_cpu_offload ........................... False
424
+ optimizer_offload_fraction ...................... 1.0
425
+ output_bert_embeddings .......................... False
426
+ overlap_cpu_optimizer_d2h_h2d ................... False
427
+ overlap_grad_reduce ............................. False
428
+ overlap_p2p_comm ................................ False
429
+ overlap_p2p_comm_warmup_flush ................... False
430
+ overlap_param_gather ............................ False
431
+ overlap_param_gather_with_optimizer_step ........ False
432
+ override_opt_param_scheduler .................... False
433
+ params_dtype .................................... torch.float16
434
+ patch_dim ....................................... 16
435
+ per_split_data_args_path ........................ None
436
+ perform_initialization .......................... True
437
+ pin_cpu_grads ................................... True
438
+ pin_cpu_params .................................. True
439
+ pipeline_model_parallel_comm_backend ............ None
440
+ pipeline_model_parallel_size .................... 1
441
+ pipeline_model_parallel_split_rank .............. None
442
+ position_embedding_type ......................... learned_absolute
443
+ pretrained_checkpoint ........................... None
444
+ profile ......................................... False
445
+ profile_ranks ................................... [0]
446
+ profile_step_end ................................ 12
447
+ profile_step_start .............................. 10
448
+ q_lora_rank ..................................... None
449
+ qk_head_dim ..................................... 128
450
+ qk_l2_norm ...................................... False
451
+ qk_layernorm .................................... False
452
+ qk_pos_emb_head_dim ............................. 64
453
+ query_in_block_prob ............................. 0.1
454
+ rampup_batch_size ............................... None
455
+ rank ............................................ 0
456
+ recompute_granularity ........................... None
457
+ recompute_method ................................ None
458
+ recompute_modules ............................... None
459
+ recompute_num_layers ............................ None
460
+ record_memory_history ........................... False
461
+ relative_attention_max_distance ................. 128
462
+ relative_attention_num_buckets .................. 32
463
+ replication ..................................... False
464
+ replication_factor .............................. 2
465
+ replication_jump ................................ None
466
+ rerun_mode ...................................... disabled
467
+ reset_attention_mask ............................ False
468
+ reset_position_ids .............................. False
469
+ result_rejected_tracker_filename ................ None
470
+ retriever_report_topk_accuracies ................ []
471
+ retriever_score_scaling ......................... False
472
+ retriever_seq_length ............................ 256
473
+ retro_add_retriever ............................. False
474
+ retro_attention_gate ............................ 1
475
+ retro_cyclic_train_iters ........................ None
476
+ retro_encoder_attention_dropout ................. 0.1
477
+ retro_encoder_hidden_dropout .................... 0.1
478
+ retro_encoder_layers ............................ 2
479
+ retro_num_neighbors ............................. 2
480
+ retro_num_retrieved_chunks ...................... 2
481
+ retro_project_dir ............................... None
482
+ retro_verify_neighbor_count ..................... True
483
+ rope_scaling_factor ............................. 8.0
484
+ rotary_base ..................................... 10000
485
+ rotary_interleaved .............................. False
486
+ rotary_percent .................................. 1.0
487
+ rotary_scaling_factor ........................... 1.0
488
+ rotary_seq_len_interpolation_factor ............. None
489
+ run_workload_inspector_server ................... False
490
+ sample_rate ..................................... 1.0
491
+ save ............................................ gpt-checkpoint
492
+ save_interval ................................... 16
493
+ scatter_gather_tensors_in_pipeline .............. True
494
+ seed ............................................ 1234
495
+ seq_length ...................................... 1024
496
+ sequence_parallel ............................... False
497
+ sgd_momentum .................................... 0.9
498
+ short_seq_prob .................................. 0.1
499
+ skip_train ...................................... False
500
+ skipped_train_samples ........................... 0
501
+ spec ............................................ None
502
+ split ........................................... None
503
+ squared_relu .................................... False
504
+ start_weight_decay .............................. 0.1
505
+ straggler_ctrlr_port ............................ 65535
506
+ straggler_minmax_count .......................... 1
507
+ suggested_communication_unit_size ............... None
508
+ swiglu .......................................... False
509
+ swin_backbone_type .............................. tiny
510
+ symmetric_ar_type ............................... None
511
+ te_rng_tracker .................................. False
512
+ tensor_model_parallel_size ...................... 8
513
+ tensorboard_dir ................................. tensorboard-logs/
514
+ tensorboard_log_interval ........................ 1
515
+ tensorboard_queue_size .......................... 1000
516
+ test_data_path .................................. None
517
+ test_mode ....................................... False
518
+ tiktoken_num_special_tokens ..................... 1000
519
+ tiktoken_pattern ................................ None
520
+ tiktoken_special_tokens ......................... None
521
+ timing_log_level ................................ 0
522
+ timing_log_option ............................... minmax
523
+ titles_data_path ................................ None
524
+ tokenizer_model ................................. None
525
+ tokenizer_type .................................. GPT2BPETokenizer
526
+ torch_fsdp2_reshard_after_forward ............... True
527
+ tp_comm_bootstrap_backend ....................... nccl
528
+ tp_comm_bulk_dgrad .............................. True
529
+ tp_comm_bulk_wgrad .............................. True
530
+ tp_comm_overlap ................................. False
531
+ tp_comm_overlap_ag .............................. True
532
+ tp_comm_overlap_cfg ............................. None
533
+ tp_comm_overlap_rs .............................. True
534
+ tp_comm_overlap_rs_dgrad ........................ False
535
+ tp_comm_split_ag ................................ True
536
+ tp_comm_split_rs ................................ True
537
+ train_data_path ................................. None
538
+ train_iters ..................................... 10
539
+ train_samples ................................... None
540
+ train_sync_interval ............................. None
541
+ transformer_impl ................................ transformer_engine
542
+ transformer_pipeline_model_parallel_size ........ 1
543
+ untie_embeddings_and_output_weights ............. False
544
+ use_checkpoint_args ............................. False
545
+ use_checkpoint_opt_param_scheduler .............. False
546
+ use_cpu_initialization .......................... None
547
+ use_custom_fsdp ................................. False
548
+ use_dist_ckpt ................................... True
549
+ use_dist_ckpt_deprecated ........................ False
550
+ use_distributed_optimizer ....................... False
551
+ use_flash_attn .................................. False
552
+ use_legacy_models ............................... False
553
+ use_mp_args_from_checkpoint_args ................ False
554
+ use_one_sent_docs ............................... False
555
+ use_persistent_ckpt_worker ...................... False
556
+ use_precision_aware_optimizer ................... False
557
+ use_pytorch_profiler ............................ False
558
+ use_ring_exchange_p2p ........................... False
559
+ use_rope_scaling ................................ False
560
+ use_rotary_position_embeddings .................. False
561
+ use_sharp ....................................... False
562
+ use_tokenizer_model_from_checkpoint_args ........ True
563
+ use_torch_fsdp2 ................................. False
564
+ use_torch_optimizer_for_cpu_offload ............. False
565
+ use_tp_pp_dp_mapping ............................ False
566
+ v_head_dim ...................................... 128
567
+ valid_data_path ................................. None
568
+ variable_seq_lengths ............................ False
569
+ virtual_pipeline_model_parallel_size ............ None
570
+ vision_backbone_type ............................ vit
571
+ vision_pretraining .............................. False
572
+ vision_pretraining_type ......................... classify
573
+ vocab_extra_ids ................................. 0
574
+ vocab_file ...................................... vocab.json
575
+ vocab_size ...................................... None
576
+ wandb_exp_name ..................................
577
+ wandb_project ...................................
578
+ wandb_save_dir ..................................
579
+ weight_decay .................................... 0.1
580
+ weight_decay_incr_style ......................... constant
581
+ wgrad_deferral_limit ............................ 0
582
+ world_size ...................................... 64
583
+ yaml_cfg ........................................ None
584
+ -------------------- end of arguments ---------------------
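The dump above fixes the parallel layout for this run: world_size 64, tensor_model_parallel_size 8 and a transformer pipeline size of 1. A minimal sketch of the arithmetic for how many ranks are left over for data/context parallelism, using only the values printed above (variable names are ours, not Megatron's):

    world_size = 64                   # from the dump
    tensor_parallel = 8               # tensor_model_parallel_size
    pipeline_parallel = 1             # transformer_pipeline_model_parallel_size
    leftover = world_size // (tensor_parallel * pipeline_parallel)
    print(leftover)                   # 8 ranks shared by data and context parallelism

How those 8 ranks are split between data and context parallelism is set by arguments outside this excerpt.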
585
+ INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
586
+ > building GPT2BPETokenizer tokenizer ...
587
+ INFO:megatron.training.initialize:Setting logging level to 0
588
+ INFO:megatron.training.initialize:Setting logging level to 0
589
+ INFO:megatron.training.initialize:Setting logging level to 0
590
+ INFO:megatron.training.initialize:Setting logging level to 0
591
+ INFO:megatron.training.initialize:Setting logging level to 0
592
+ INFO:megatron.training.initialize:Setting logging level to 0
593
+ INFO:megatron.training.initialize:Setting logging level to 0
594
+ > padded vocab (size: 50257) with 943 dummy tokens (new size: 51200)
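The 943 dummy tokens are what it takes to round the 50257-entry GPT-2 vocabulary up to a size that divides evenly across the 8 tensor-parallel ranks. A quick check of that arithmetic, assuming the usual make_vocab_size_divisible_by default of 128 multiplied by the tensor-parallel size:

    orig_vocab = 50257
    multiple = 128 * 8                                   # assumed divisibility unit
    padded = ((orig_vocab + multiple - 1) // multiple) * multiple
    print(padded, padded - orig_vocab)                   # 51200 943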
595
+ INFO:megatron.training.initialize:Setting logging level to 0
596
+ INFO:megatron.training.initialize:Setting logging level to 0
597
+ WARNING:megatron.core.rerun_state_machine:RerunStateMachine initialized in mode RerunMode.DISABLED
598
+ > initializing torch distributed ...
599
+ INFO:megatron.training.initialize:Setting logging level to 0
600
+ INFO:megatron.training.initialize:Setting logging level to 0
601
+ INFO:megatron.training.initialize:Setting logging level to 0
602
+ INFO:megatron.training.initialize:Setting logging level to 0
603
+ INFO:megatron.training.initialize:Setting logging level to 0
604
+ INFO:megatron.training.initialize:Setting logging level to 0
605
+ INFO:megatron.training.initialize:Setting logging level to 0
606
+ INFO:megatron.training.initialize:Setting logging level to 0
607
+ INFO:megatron.training.initialize:Setting logging level to 0
608
+ INFO:megatron.training.initialize:Setting logging level to 0
609
+ INFO:megatron.training.initialize:Setting logging level to 0
610
+ INFO:megatron.training.initialize:Setting logging level to 0
611
+ INFO:megatron.training.initialize:Setting logging level to 0
612
+ INFO:megatron.training.initialize:Setting logging level to 0
613
+ INFO:megatron.training.initialize:Setting logging level to 0
614
+ INFO:megatron.training.initialize:Setting logging level to 0
615
+ INFO:megatron.training.initialize:Setting logging level to 0
616
+ INFO:megatron.training.initialize:Setting logging level to 0
617
+ INFO:megatron.training.initialize:Setting logging level to 0
618
+ INFO:megatron.training.initialize:Setting logging level to 0
619
+ INFO:megatron.training.initialize:Setting logging level to 0
620
+ INFO:megatron.training.initialize:Setting logging level to 0
621
+ INFO:megatron.training.initialize:Setting logging level to 0
622
+ WARNING: TensorBoard writing requested but is not available (are you using PyTorch 1.1.0 or later?), no TensorBoard logs will be written.
623
+ WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
624
+ INFO:megatron.training.initialize:Setting logging level to 0
625
+ INFO:megatron.training.initialize:Setting logging level to 0
626
+ INFO:megatron.training.initialize:Setting logging level to 0
627
+ INFO:megatron.training.initialize:Setting logging level to 0
628
+ INFO:megatron.training.initialize:Setting logging level to 0
629
+ INFO:megatron.training.initialize:Setting logging level to 0
630
+ INFO:megatron.training.initialize:Setting logging level to 0
631
+ INFO:megatron.training.initialize:Setting logging level to 0
632
+ INFO:megatron.training.initialize:Setting logging level to 0
633
+ INFO:megatron.training.initialize:Setting logging level to 0
634
+ INFO:megatron.training.initialize:Setting logging level to 0
635
+ INFO:megatron.training.initialize:Setting logging level to 0
636
+ INFO:megatron.training.initialize:Setting logging level to 0
637
+ INFO:megatron.training.initialize:Setting logging level to 0
638
+ INFO:megatron.training.initialize:Setting logging level to 0
639
+ INFO:megatron.training.initialize:Setting logging level to 0
640
+ INFO:megatron.training.initialize:Setting logging level to 0
641
+ INFO:megatron.training.initialize:Setting logging level to 0
642
+ INFO:megatron.training.initialize:Setting logging level to 0
643
+ INFO:megatron.training.initialize:Setting logging level to 0
644
+ INFO:megatron.training.initialize:Setting logging level to 0
645
+ INFO:megatron.training.initialize:Setting logging level to 0
646
+ INFO:megatron.training.initialize:Setting logging level to 0
647
+ > initialized tensor model parallel with size 8
648
+ > initialized pipeline model parallel with size 1
649
+ > setting random seeds to 1234 ...
650
+ > compiling dataset index builder ...
651
+ make: Entering directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
652
+ make: Nothing to be done for 'default'.
653
+ make: Leaving directory '/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/datasets'
654
+ >>> done with dataset index builder. Compilation time: 0.048 seconds
655
+ > compiling and loading fused kernels ...
656
+ >>> done with compiling and loading fused kernels. Compilation time: 7.243 seconds
attnserver.run_attnserver.slurm.sh.343195.out.log CHANGED
@@ -62605,3 +62605,346 @@ batch tensor after cp: labels torch.Size([1, 24576])
62605
  batch tensor after cp: loss_mask torch.Size([1, 24576])
62606
  batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62607
  batch tensor after cp: position_ids torch.Size([1, 24576])
62608
+ batch tensor: tokens torch.Size([1, 98304])
62609
+ batch tensor: labels torch.Size([1, 98304])
62610
+ batch tensor: loss_mask torch.Size([1, 98304])
62611
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62612
+ batch tensor: position_ids torch.Size([1, 98304])
62613
+ batch tensor after cp: tokens torch.Size([1, 24576])
62614
+ batch tensor after cp: labels torch.Size([1, 24576])
62615
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62616
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62617
+ batch tensor after cp: position_ids torch.Size([1, 24576])
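Each block above prints the full batch first (tokens, labels, loss_mask and position_ids of length 98304, plus a 98304x98304 attention mask) and then the per-rank view after context parallelism, where the sequence dimension shrinks to 24576; the 4x ratio implies a context-parallel size of 4 for this job. A simplified sketch of that slicing, assuming a plain contiguous split (the actual implementation may interleave chunks to balance causal attention, so this only illustrates the shapes):

    import torch

    def split_for_cp(batch, cp_size, cp_rank, seq_dim=1):
        # keep this rank's 1/cp_size slice of the sequence dimension
        return {k: v.chunk(cp_size, dim=seq_dim)[cp_rank] for k, v in batch.items()}

    tokens = torch.zeros(1, 98304, dtype=torch.long)
    shard = split_for_cp({"tokens": tokens}, cp_size=4, cp_rank=0)
    print(shard["tokens"].shape)      # torch.Size([1, 24576])

Note that the attention_mask keeps its full 98304 key dimension ([1, 1, 24576, 98304]): only the query side of the mask is sharded per rank.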
62618
+ batch tensor: tokens torch.Size([1, 98304])
62619
+ batch tensor: labels torch.Size([1, 98304])
62620
+ batch tensor: loss_mask torch.Size([1, 98304])
62621
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62622
+ batch tensor: position_ids torch.Size([1, 98304])
62623
+ batch tensor after cp: tokens torch.Size([1, 24576])
62624
+ batch tensor after cp: labels torch.Size([1, 24576])
62625
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62626
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62627
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62628
+ batch tensor: tokens torch.Size([1, 98304])
62629
+ batch tensor: labels torch.Size([1, 98304])
62630
+ batch tensor: loss_mask torch.Size([1, 98304])
62631
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62632
+ batch tensor: position_ids torch.Size([1, 98304])
62633
+ batch tensor after cp: tokens torch.Size([1, 24576])
62634
+ batch tensor after cp: labels torch.Size([1, 24576])
62635
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62636
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62637
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62638
+ Start exporting trace 3
62639
+ Done exporting trace 3
62640
+ [2025-06-21 20:50:53] iteration 4/ 10 | consumed samples: 4 | elapsed time per iteration (ms): 174836.8 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 536870912.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
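With a global batch size of 1 and a 98304-token sequence, the reported 174836.8 ms per iteration corresponds to roughly 560 tokens/s for the whole job; the arithmetic:

    tokens_per_iter = 1 * 98304              # global batch size x sequence length
    iter_time_s = 174836.8 / 1000.0
    print(tokens_per_iter / iter_time_s)     # ~562 tokens/s aggregate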
62641
+ batch tensor: tokens torch.Size([1, 98304])
62642
+ batch tensor: labels torch.Size([1, 98304])
62643
+ batch tensor: loss_mask torch.Size([1, 98304])
62644
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62645
+ batch tensor: position_ids torch.Size([1, 98304])
62646
+ batch tensor after cp: tokens torch.Size([1, 24576])
62647
+ batch tensor after cp: labels torch.Size([1, 24576])
62648
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62649
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62650
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62651
+ batch tensor: tokens torch.Size([1, 98304])
62652
+ batch tensor: labels torch.Size([1, 98304])
62653
+ batch tensor: loss_mask torch.Size([1, 98304])
62654
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62655
+ batch tensor: position_ids torch.Size([1, 98304])
62656
+ batch tensor after cp: tokens torch.Size([1, 24576])
62657
+ batch tensor after cp: labels torch.Size([1, 24576])
62658
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62659
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62660
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62661
+ batch tensor: tokens torch.Size([1, 98304])
62662
+ batch tensor: labels torch.Size([1, 98304])
62663
+ batch tensor: loss_mask torch.Size([1, 98304])
62664
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62665
+ batch tensor: position_ids torch.Size([1, 98304])
62666
+ batch tensor after cp: tokens torch.Size([1, 24576])
62667
+ batch tensor after cp: labels torch.Size([1, 24576])
62668
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62669
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62670
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62671
+ batch tensor: tokens torch.Size([1, 98304])
62672
+ batch tensor: labels torch.Size([1, 98304])
62673
+ batch tensor: loss_mask torch.Size([1, 98304])
62674
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62675
+ batch tensor: position_ids torch.Size([1, 98304])
62676
+ batch tensor after cp: tokens torch.Size([1, 24576])
62677
+ batch tensor after cp: labels torch.Size([1, 24576])
62678
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62679
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62680
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62681
+ batch tensor: tokens torch.Size([1, 98304])
62682
+ batch tensor: labels torch.Size([1, 98304])
62683
+ batch tensor: loss_mask torch.Size([1, 98304])
62684
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62685
+ batch tensor: position_ids torch.Size([1, 98304])
62686
+ batch tensor after cp: tokens torch.Size([1, 24576])
62687
+ batch tensor after cp: labels torch.Size([1, 24576])
62688
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62689
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62690
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62691
+ batch tensor: tokens torch.Size([1, 98304])
62692
+ batch tensor: labels torch.Size([1, 98304])
62693
+ batch tensor: loss_mask torch.Size([1, 98304])
62694
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62695
+ batch tensor: position_ids torch.Size([1, 98304])
62696
+ batch tensor after cp: tokens torch.Size([1, 24576])
62697
+ batch tensor after cp: labels torch.Size([1, 24576])
62698
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62699
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62700
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62701
+ batch tensor: tokens torch.Size([1, 98304])
62702
+ batch tensor: labels torch.Size([1, 98304])
62703
+ batch tensor: loss_mask torch.Size([1, 98304])
62704
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62705
+ batch tensor: position_ids torch.Size([1, 98304])
62706
+ batch tensor after cp: tokens torch.Size([1, 24576])
62707
+ batch tensor after cp: labels torch.Size([1, 24576])
62708
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62709
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62710
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62711
+ batch tensor: tokens torch.Size([1, 98304])
62712
+ batch tensor: labels torch.Size([1, 98304])
62713
+ batch tensor: loss_mask torch.Size([1, 98304])
62714
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62715
+ batch tensor: position_ids torch.Size([1, 98304])
62716
+ batch tensor after cp: tokens torch.Size([1, 24576])
62717
+ batch tensor after cp: labels torch.Size([1, 24576])
62718
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62719
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62720
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62721
+ batch tensor: tokens torch.Size([1, 98304])
62722
+ batch tensor: labels torch.Size([1, 98304])
62723
+ batch tensor: loss_mask torch.Size([1, 98304])
62724
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62725
+ batch tensor: position_ids torch.Size([1, 98304])
62726
+ batch tensor after cp: tokens torch.Size([1, 24576])
62727
+ batch tensor after cp: labels torch.Size([1, 24576])
62728
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62729
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62730
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62731
+ batch tensor: tokens torch.Size([1, 98304])
62732
+ batch tensor: labels torch.Size([1, 98304])
62733
+ batch tensor: loss_mask torch.Size([1, 98304])
62734
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62735
+ batch tensor: position_ids torch.Size([1, 98304])
62736
+ batch tensor after cp: tokens torch.Size([1, 24576])
62737
+ batch tensor after cp: labels torch.Size([1, 24576])
62738
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62739
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62740
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62741
+ batch tensor: tokens torch.Size([1, 98304])
62742
+ batch tensor: labels torch.Size([1, 98304])
62743
+ batch tensor: loss_mask torch.Size([1, 98304])
62744
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62745
+ batch tensor: position_ids torch.Size([1, 98304])
62746
+ batch tensor after cp: tokens torch.Size([1, 24576])
62747
+ batch tensor after cp: labels torch.Size([1, 24576])
62748
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62749
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62750
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62751
+ batch tensor: tokens torch.Size([1, 98304])
62752
+ batch tensor: labels torch.Size([1, 98304])
62753
+ batch tensor: loss_mask torch.Size([1, 98304])
62754
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62755
+ batch tensor: position_ids torch.Size([1, 98304])
62756
+ batch tensor after cp: tokens torch.Size([1, 24576])
62757
+ batch tensor after cp: labels torch.Size([1, 24576])
62758
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62759
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62760
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62761
+ batch tensor: tokens torch.Size([1, 98304])
62762
+ batch tensor: labels torch.Size([1, 98304])
62763
+ batch tensor: loss_mask torch.Size([1, 98304])
62764
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62765
+ batch tensor: position_ids torch.Size([1, 98304])
62766
+ batch tensor after cp: tokens torch.Size([1, 24576])
62767
+ batch tensor after cp: labels torch.Size([1, 24576])
62768
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62769
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62770
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62771
+ batch tensor: tokens torch.Size([1, 98304])
62772
+ batch tensor: labels torch.Size([1, 98304])
62773
+ batch tensor: loss_mask torch.Size([1, 98304])
62774
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62775
+ batch tensor: position_ids torch.Size([1, 98304])
62776
+ batch tensor after cp: tokens torch.Size([1, 24576])
62777
+ batch tensor after cp: labels torch.Size([1, 24576])
62778
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62779
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62780
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62781
+ batch tensor: tokens torch.Size([1, 98304])
62782
+ batch tensor: labels torch.Size([1, 98304])
62783
+ batch tensor: loss_mask torch.Size([1, 98304])
62784
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62785
+ batch tensor: position_ids torch.Size([1, 98304])
62786
+ batch tensor after cp: tokens torch.Size([1, 24576])
62787
+ batch tensor after cp: labels torch.Size([1, 24576])
62788
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62789
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62790
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62791
+ batch tensor: tokens torch.Size([1, 98304])
62792
+ batch tensor: labels torch.Size([1, 98304])
62793
+ batch tensor: loss_mask torch.Size([1, 98304])
62794
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62795
+ batch tensor: position_ids torch.Size([1, 98304])
62796
+ batch tensor after cp: tokens torch.Size([1, 24576])
62797
+ batch tensor after cp: labels torch.Size([1, 24576])
62798
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62799
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62800
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62801
+ batch tensor: tokens torch.Size([1, 98304])
62802
+ batch tensor: labels torch.Size([1, 98304])
62803
+ batch tensor: loss_mask torch.Size([1, 98304])
62804
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62805
+ batch tensor: position_ids torch.Size([1, 98304])
62806
+ batch tensor after cp: tokens torch.Size([1, 24576])
62807
+ batch tensor after cp: labels torch.Size([1, 24576])
62808
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62809
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62810
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62811
+ batch tensor: tokens torch.Size([1, 98304])
62812
+ batch tensor: labels torch.Size([1, 98304])
62813
+ batch tensor: loss_mask torch.Size([1, 98304])
62814
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62815
+ batch tensor: position_ids torch.Size([1, 98304])
62816
+ batch tensor after cp: tokens torch.Size([1, 24576])
62817
+ batch tensor after cp: labels torch.Size([1, 24576])
62818
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62819
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62820
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62821
+ batch tensor: tokens torch.Size([1, 98304])
62822
+ batch tensor: labels torch.Size([1, 98304])
62823
+ batch tensor: loss_mask torch.Size([1, 98304])
62824
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62825
+ batch tensor: position_ids torch.Size([1, 98304])
62826
+ batch tensor after cp: tokens torch.Size([1, 24576])
62827
+ batch tensor after cp: labels torch.Size([1, 24576])
62828
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62829
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62830
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62831
+ batch tensor: tokens torch.Size([1, 98304])
62832
+ batch tensor: labels torch.Size([1, 98304])
62833
+ batch tensor: loss_mask torch.Size([1, 98304])
62834
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62835
+ batch tensor: position_ids torch.Size([1, 98304])
62836
+ batch tensor after cp: tokens torch.Size([1, 24576])
62837
+ batch tensor after cp: labels torch.Size([1, 24576])
62838
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62839
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62840
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62841
+ batch tensor: tokens torch.Size([1, 98304])
62842
+ batch tensor: labels torch.Size([1, 98304])
62843
+ batch tensor: loss_mask torch.Size([1, 98304])
62844
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62845
+ batch tensor: position_ids torch.Size([1, 98304])
62846
+ batch tensor after cp: tokens torch.Size([1, 24576])
62847
+ batch tensor after cp: labels torch.Size([1, 24576])
62848
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62849
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62850
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62851
+ batch tensor: tokens torch.Size([1, 98304])
62852
+ batch tensor: labels torch.Size([1, 98304])
62853
+ batch tensor: loss_mask torch.Size([1, 98304])
62854
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62855
+ batch tensor: position_ids torch.Size([1, 98304])
62856
+ batch tensor after cp: tokens torch.Size([1, 24576])
62857
+ batch tensor after cp: labels torch.Size([1, 24576])
62858
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62859
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62860
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62861
+ batch tensor: tokens torch.Size([1, 98304])
62862
+ batch tensor: labels torch.Size([1, 98304])
62863
+ batch tensor: loss_mask torch.Size([1, 98304])
62864
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62865
+ batch tensor: position_ids torch.Size([1, 98304])
62866
+ batch tensor after cp: tokens torch.Size([1, 24576])
62867
+ batch tensor after cp: labels torch.Size([1, 24576])
62868
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62869
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62870
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62871
+ batch tensor: tokens torch.Size([1, 98304])
62872
+ batch tensor: labels torch.Size([1, 98304])
62873
+ batch tensor: loss_mask torch.Size([1, 98304])
62874
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62875
+ batch tensor: position_ids torch.Size([1, 98304])
62876
+ batch tensor after cp: tokens torch.Size([1, 24576])
62877
+ batch tensor after cp: labels torch.Size([1, 24576])
62878
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62879
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62880
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62881
+ batch tensor: tokens torch.Size([1, 98304])
62882
+ batch tensor: labels torch.Size([1, 98304])
62883
+ batch tensor: loss_mask torch.Size([1, 98304])
62884
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62885
+ batch tensor: position_ids torch.Size([1, 98304])
62886
+ batch tensor after cp: tokens torch.Size([1, 24576])
62887
+ batch tensor after cp: labels torch.Size([1, 24576])
62888
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62889
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62890
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62891
+ batch tensor: tokens torch.Size([1, 98304])
62892
+ batch tensor: labels torch.Size([1, 98304])
62893
+ batch tensor: loss_mask torch.Size([1, 98304])
62894
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62895
+ batch tensor: position_ids torch.Size([1, 98304])
62896
+ batch tensor after cp: tokens torch.Size([1, 24576])
62897
+ batch tensor after cp: labels torch.Size([1, 24576])
62898
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62899
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62900
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62901
+ batch tensor: tokens torch.Size([1, 98304])
62902
+ batch tensor: labels torch.Size([1, 98304])
62903
+ batch tensor: loss_mask torch.Size([1, 98304])
62904
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62905
+ batch tensor: position_ids torch.Size([1, 98304])
62906
+ batch tensor after cp: tokens torch.Size([1, 24576])
62907
+ batch tensor after cp: labels torch.Size([1, 24576])
62908
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62909
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62910
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62911
+ batch tensor: tokens torch.Size([1, 98304])
62912
+ batch tensor: labels torch.Size([1, 98304])
62913
+ batch tensor: loss_mask torch.Size([1, 98304])
62914
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62915
+ batch tensor: position_ids torch.Size([1, 98304])
62916
+ batch tensor after cp: tokens torch.Size([1, 24576])
62917
+ batch tensor after cp: labels torch.Size([1, 24576])
62918
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62919
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62920
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62921
+ batch tensor: tokens torch.Size([1, 98304])
62922
+ batch tensor: labels torch.Size([1, 98304])
62923
+ batch tensor: loss_mask torch.Size([1, 98304])
62924
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62925
+ batch tensor: position_ids torch.Size([1, 98304])
62926
+ batch tensor after cp: tokens torch.Size([1, 24576])
62927
+ batch tensor after cp: labels torch.Size([1, 24576])
62928
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62929
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62930
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62931
+ batch tensor: tokens torch.Size([1, 98304])
62932
+ batch tensor: labels torch.Size([1, 98304])
62933
+ batch tensor: loss_mask torch.Size([1, 98304])
62934
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62935
+ batch tensor: position_ids torch.Size([1, 98304])
62936
+ batch tensor after cp: tokens torch.Size([1, 24576])
62937
+ batch tensor after cp: labels torch.Size([1, 24576])
62938
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62939
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62940
+ batch tensor after cp: position_ids torch.Size([1, 24576])
62941
+ batch tensor: tokens torch.Size([1, 98304])
62942
+ batch tensor: labels torch.Size([1, 98304])
62943
+ batch tensor: loss_mask torch.Size([1, 98304])
62944
+ batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
62945
+ batch tensor: position_ids torch.Size([1, 98304])
62946
+ batch tensor after cp: tokens torch.Size([1, 24576])
62947
+ batch tensor after cp: labels torch.Size([1, 24576])
62948
+ batch tensor after cp: loss_mask torch.Size([1, 24576])
62949
+ batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
62950
+ batch tensor after cp: position_ids torch.Size([1, 24576])
attnserver.run_attnserver.slurm.sh.343201.out.log CHANGED
@@ -37374,3 +37374,319 @@ batch tensor after cp: labels torch.Size([1, 65536])
37374
  batch tensor after cp: loss_mask torch.Size([1, 65536])
37375
  batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37376
  batch tensor after cp: position_ids torch.Size([1, 65536])
37377
+ Start exporting trace 4
37378
+ Done exporting trace 4
37379
+ [2025-06-21 20:48:49] iteration 5/ 10 | consumed samples: 5 | elapsed time per iteration (ms): 151207.3 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 268435456.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
37380
+ batch tensor: tokens torch.Size([1, 131072])
37381
+ batch tensor: labels torch.Size([1, 131072])
37382
+ batch tensor: loss_mask torch.Size([1, 131072])
37383
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37384
+ batch tensor: position_ids torch.Size([1, 131072])
37385
+ batch tensor after cp: tokens torch.Size([1, 65536])
37386
+ batch tensor after cp: labels torch.Size([1, 65536])
37387
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37388
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37389
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37390
+ batch tensor: tokens torch.Size([1, 131072])
37391
+ batch tensor: labels torch.Size([1, 131072])
37392
+ batch tensor: loss_mask torch.Size([1, 131072])
37393
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37394
+ batch tensor: position_ids torch.Size([1, 131072])
37395
+ batch tensor after cp: tokens torch.Size([1, 65536])
37396
+ batch tensor after cp: labels torch.Size([1, 65536])
37397
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37398
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37399
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37400
+ batch tensor: tokens torch.Size([1, 131072])
37401
+ batch tensor: labels torch.Size([1, 131072])
37402
+ batch tensor: loss_mask torch.Size([1, 131072])
37403
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37404
+ batch tensor: position_ids torch.Size([1, 131072])
37405
+ batch tensor after cp: tokens torch.Size([1, 65536])
37406
+ batch tensor after cp: labels torch.Size([1, 65536])
37407
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37408
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37409
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37410
+ batch tensor: tokens torch.Size([1, 131072])
37411
+ batch tensor: labels torch.Size([1, 131072])
37412
+ batch tensor: loss_mask torch.Size([1, 131072])
37413
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37414
+ batch tensor: position_ids torch.Size([1, 131072])
37415
+ batch tensor after cp: tokens torch.Size([1, 65536])
37416
+ batch tensor after cp: labels torch.Size([1, 65536])
37417
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37418
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37419
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37420
+ batch tensor: tokens torch.Size([1, 131072])
37421
+ batch tensor: labels torch.Size([1, 131072])
37422
+ batch tensor: loss_mask torch.Size([1, 131072])
37423
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37424
+ batch tensor: position_ids torch.Size([1, 131072])
37425
+ batch tensor after cp: tokens torch.Size([1, 65536])
37426
+ batch tensor after cp: labels torch.Size([1, 65536])
37427
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37428
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37429
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37430
+ batch tensor: tokens torch.Size([1, 131072])
37431
+ batch tensor: labels torch.Size([1, 131072])
37432
+ batch tensor: loss_mask torch.Size([1, 131072])
37433
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37434
+ batch tensor: position_ids torch.Size([1, 131072])
37435
+ batch tensor after cp: tokens torch.Size([1, 65536])
37436
+ batch tensor after cp: labels torch.Size([1, 65536])
37437
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37438
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37439
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37440
+ batch tensor: tokens torch.Size([1, 131072])
37441
+ batch tensor: labels torch.Size([1, 131072])
37442
+ batch tensor: loss_mask torch.Size([1, 131072])
37443
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37444
+ batch tensor: position_ids torch.Size([1, 131072])
37445
+ batch tensor after cp: tokens torch.Size([1, 65536])
37446
+ batch tensor after cp: labels torch.Size([1, 65536])
37447
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37448
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37449
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37450
+ batch tensor: tokens torch.Size([1, 131072])
37451
+ batch tensor: labels torch.Size([1, 131072])
37452
+ batch tensor: loss_mask torch.Size([1, 131072])
37453
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37454
+ batch tensor: position_ids torch.Size([1, 131072])
37455
+ batch tensor after cp: tokens torch.Size([1, 65536])
37456
+ batch tensor after cp: labels torch.Size([1, 65536])
37457
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37458
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37459
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37460
+ batch tensor: tokens torch.Size([1, 131072])
37461
+ batch tensor: labels torch.Size([1, 131072])
37462
+ batch tensor: loss_mask torch.Size([1, 131072])
37463
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37464
+ batch tensor: position_ids torch.Size([1, 131072])
37465
+ batch tensor after cp: tokens torch.Size([1, 65536])
37466
+ batch tensor after cp: labels torch.Size([1, 65536])
37467
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37468
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37469
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37470
+ batch tensor: tokens torch.Size([1, 131072])
37471
+ batch tensor: labels torch.Size([1, 131072])
37472
+ batch tensor: loss_mask torch.Size([1, 131072])
37473
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37474
+ batch tensor: position_ids torch.Size([1, 131072])
37475
+ batch tensor after cp: tokens torch.Size([1, 65536])
37476
+ batch tensor after cp: labels torch.Size([1, 65536])
37477
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37478
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37479
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37480
+ batch tensor: tokens torch.Size([1, 131072])
37481
+ batch tensor: labels torch.Size([1, 131072])
37482
+ batch tensor: loss_mask torch.Size([1, 131072])
37483
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37484
+ batch tensor: position_ids torch.Size([1, 131072])
37485
+ batch tensor after cp: tokens torch.Size([1, 65536])
37486
+ batch tensor after cp: labels torch.Size([1, 65536])
37487
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37488
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37489
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37490
+ batch tensor: tokens torch.Size([1, 131072])
37491
+ batch tensor: labels torch.Size([1, 131072])
37492
+ batch tensor: loss_mask torch.Size([1, 131072])
37493
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37494
+ batch tensor: position_ids torch.Size([1, 131072])
37495
+ batch tensor after cp: tokens torch.Size([1, 65536])
37496
+ batch tensor after cp: labels torch.Size([1, 65536])
37497
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37498
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37499
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37500
+ batch tensor: tokens torch.Size([1, 131072])
37501
+ batch tensor: labels torch.Size([1, 131072])
37502
+ batch tensor: loss_mask torch.Size([1, 131072])
37503
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37504
+ batch tensor: position_ids torch.Size([1, 131072])
37505
+ batch tensor after cp: tokens torch.Size([1, 65536])
37506
+ batch tensor after cp: labels torch.Size([1, 65536])
37507
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37508
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37509
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37510
+ batch tensor: tokens torch.Size([1, 131072])
37511
+ batch tensor: labels torch.Size([1, 131072])
37512
+ batch tensor: loss_mask torch.Size([1, 131072])
37513
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37514
+ batch tensor: position_ids torch.Size([1, 131072])
37515
+ batch tensor after cp: tokens torch.Size([1, 65536])
37516
+ batch tensor after cp: labels torch.Size([1, 65536])
37517
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37518
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37519
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37520
+ batch tensor: tokens torch.Size([1, 131072])
37521
+ batch tensor: labels torch.Size([1, 131072])
37522
+ batch tensor: loss_mask torch.Size([1, 131072])
37523
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37524
+ batch tensor: position_ids torch.Size([1, 131072])
37525
+ batch tensor after cp: tokens torch.Size([1, 65536])
37526
+ batch tensor after cp: labels torch.Size([1, 65536])
37527
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37528
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37529
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37530
+ batch tensor: tokens torch.Size([1, 131072])
37531
+ batch tensor: labels torch.Size([1, 131072])
37532
+ batch tensor: loss_mask torch.Size([1, 131072])
37533
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37534
+ batch tensor: position_ids torch.Size([1, 131072])
37535
+ batch tensor after cp: tokens torch.Size([1, 65536])
37536
+ batch tensor after cp: labels torch.Size([1, 65536])
37537
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37538
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37539
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37540
+ Start exporting trace 5
37541
+ Done exporting trace 5
37542
+ [2025-06-21 20:51:16] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 146841.4 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
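Both iterations above report "number of skipped iterations: 1" while the loss scale halves from 268435456.0 at iteration 5 to 134217728.0 at iteration 6, which is the usual dynamic loss-scaling behaviour: a step whose gradients overflow is skipped and the scale is cut in half. A minimal sketch of that backoff rule (not the exact implementation used here):

    loss_scale = 268435456.0
    overflow = True                # gradients contained inf/nan this step
    if overflow:
        loss_scale /= 2.0          # skip the optimizer step and back off
    print(loss_scale)              # 134217728.0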
37543
+ batch tensor: tokens torch.Size([1, 131072])
37544
+ batch tensor: labels torch.Size([1, 131072])
37545
+ batch tensor: loss_mask torch.Size([1, 131072])
37546
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37547
+ batch tensor: position_ids torch.Size([1, 131072])
37548
+ batch tensor after cp: tokens torch.Size([1, 65536])
37549
+ batch tensor after cp: labels torch.Size([1, 65536])
37550
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37551
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37552
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37553
+ batch tensor: tokens torch.Size([1, 131072])
37554
+ batch tensor: labels torch.Size([1, 131072])
37555
+ batch tensor: loss_mask torch.Size([1, 131072])
37556
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37557
+ batch tensor: position_ids torch.Size([1, 131072])
37558
+ batch tensor after cp: tokens torch.Size([1, 65536])
37559
+ batch tensor after cp: labels torch.Size([1, 65536])
37560
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37561
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37562
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37563
+ batch tensor: tokens torch.Size([1, 131072])
37564
+ batch tensor: labels torch.Size([1, 131072])
37565
+ batch tensor: loss_mask torch.Size([1, 131072])
37566
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37567
+ batch tensor: position_ids torch.Size([1, 131072])
37568
+ batch tensor after cp: tokens torch.Size([1, 65536])
37569
+ batch tensor after cp: labels torch.Size([1, 65536])
37570
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37571
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37572
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37573
+ batch tensor: tokens torch.Size([1, 131072])
37574
+ batch tensor: labels torch.Size([1, 131072])
37575
+ batch tensor: loss_mask torch.Size([1, 131072])
37576
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37577
+ batch tensor: position_ids torch.Size([1, 131072])
37578
+ batch tensor after cp: tokens torch.Size([1, 65536])
37579
+ batch tensor after cp: labels torch.Size([1, 65536])
37580
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37581
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37582
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37583
+ batch tensor: tokens torch.Size([1, 131072])
37584
+ batch tensor: labels torch.Size([1, 131072])
37585
+ batch tensor: loss_mask torch.Size([1, 131072])
37586
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37587
+ batch tensor: position_ids torch.Size([1, 131072])
37588
+ batch tensor after cp: tokens torch.Size([1, 65536])
37589
+ batch tensor after cp: labels torch.Size([1, 65536])
37590
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37591
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37592
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37593
+ batch tensor: tokens torch.Size([1, 131072])
37594
+ batch tensor: labels torch.Size([1, 131072])
37595
+ batch tensor: loss_mask torch.Size([1, 131072])
37596
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37597
+ batch tensor: position_ids torch.Size([1, 131072])
37598
+ batch tensor after cp: tokens torch.Size([1, 65536])
37599
+ batch tensor after cp: labels torch.Size([1, 65536])
37600
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37601
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37602
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37603
+ batch tensor: tokens torch.Size([1, 131072])
37604
+ batch tensor: labels torch.Size([1, 131072])
37605
+ batch tensor: loss_mask torch.Size([1, 131072])
37606
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37607
+ batch tensor: position_ids torch.Size([1, 131072])
37608
+ batch tensor after cp: tokens torch.Size([1, 65536])
37609
+ batch tensor after cp: labels torch.Size([1, 65536])
37610
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37611
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37612
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37613
+ batch tensor: tokens torch.Size([1, 131072])
37614
+ batch tensor: labels torch.Size([1, 131072])
37615
+ batch tensor: loss_mask torch.Size([1, 131072])
37616
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37617
+ batch tensor: position_ids torch.Size([1, 131072])
37618
+ batch tensor after cp: tokens torch.Size([1, 65536])
37619
+ batch tensor after cp: labels torch.Size([1, 65536])
37620
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37621
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37622
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37623
+ batch tensor: tokens torch.Size([1, 131072])
37624
+ batch tensor: labels torch.Size([1, 131072])
37625
+ batch tensor: loss_mask torch.Size([1, 131072])
37626
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37627
+ batch tensor: position_ids torch.Size([1, 131072])
37628
+ batch tensor after cp: tokens torch.Size([1, 65536])
37629
+ batch tensor after cp: labels torch.Size([1, 65536])
37630
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37631
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37632
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37633
+ batch tensor: tokens torch.Size([1, 131072])
37634
+ batch tensor: labels torch.Size([1, 131072])
37635
+ batch tensor: loss_mask torch.Size([1, 131072])
37636
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37637
+ batch tensor: position_ids torch.Size([1, 131072])
37638
+ batch tensor after cp: tokens torch.Size([1, 65536])
37639
+ batch tensor after cp: labels torch.Size([1, 65536])
37640
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37641
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37642
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37643
+ batch tensor: tokens torch.Size([1, 131072])
37644
+ batch tensor: labels torch.Size([1, 131072])
37645
+ batch tensor: loss_mask torch.Size([1, 131072])
37646
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37647
+ batch tensor: position_ids torch.Size([1, 131072])
37648
+ batch tensor after cp: tokens torch.Size([1, 65536])
37649
+ batch tensor after cp: labels torch.Size([1, 65536])
37650
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37651
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37652
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37653
+ batch tensor: tokens torch.Size([1, 131072])
37654
+ batch tensor: labels torch.Size([1, 131072])
37655
+ batch tensor: loss_mask torch.Size([1, 131072])
37656
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37657
+ batch tensor: position_ids torch.Size([1, 131072])
37658
+ batch tensor after cp: tokens torch.Size([1, 65536])
37659
+ batch tensor after cp: labels torch.Size([1, 65536])
37660
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37661
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37662
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37663
+ batch tensor: tokens torch.Size([1, 131072])
37664
+ batch tensor: labels torch.Size([1, 131072])
37665
+ batch tensor: loss_mask torch.Size([1, 131072])
37666
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37667
+ batch tensor: position_ids torch.Size([1, 131072])
37668
+ batch tensor after cp: tokens torch.Size([1, 65536])
37669
+ batch tensor after cp: labels torch.Size([1, 65536])
37670
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37671
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37672
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37673
+ batch tensor: tokens torch.Size([1, 131072])
37674
+ batch tensor: labels torch.Size([1, 131072])
37675
+ batch tensor: loss_mask torch.Size([1, 131072])
37676
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37677
+ batch tensor: position_ids torch.Size([1, 131072])
37678
+ batch tensor after cp: tokens torch.Size([1, 65536])
37679
+ batch tensor after cp: labels torch.Size([1, 65536])
37680
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37681
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37682
+ batch tensor after cp: position_ids torch.Size([1, 65536])
37683
+ batch tensor: tokens torch.Size([1, 131072])
37684
+ batch tensor: labels torch.Size([1, 131072])
37685
+ batch tensor: loss_mask torch.Size([1, 131072])
37686
+ batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
37687
+ batch tensor: position_ids torch.Size([1, 131072])
37688
+ batch tensor after cp: tokens torch.Size([1, 65536])
37689
+ batch tensor after cp: labels torch.Size([1, 65536])
37690
+ batch tensor after cp: loss_mask torch.Size([1, 65536])
37691
+ batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
37692
+ batch tensor after cp: position_ids torch.Size([1, 65536])