icefall-libri-giga-pruned-transducer-stateless7-streaming-2023-04-04/decoding-results/modified_beam_search/log-decode-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4-2023-04-04-11-24-46
2023-04-04 11:24:46,056 INFO [decode.py:659] Decoding started
2023-04-04 11:24:46,057 INFO [decode.py:665] Device: cuda:0
2023-04-04 11:24:46,059 INFO [decode.py:675] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'zipformer_libri_small_models', 'icefall-git-sha1': '0994afb-dirty', 'icefall-git-date': 'Tue Apr 4 10:59:02 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_small_models', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-1-1220091118-57c4d55446-mlpzc', 'IP address': '10.177.22.19'}, 'epoch': 99, 'iter': 0, 'avg': 1, 'use_averaged_model': False, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming_multi/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'right_padding': 64, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search'), 'suffix': 'epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
2023-04-04 11:24:46,059 INFO [decode.py:677] About to create model
2023-04-04 11:24:46,636 INFO [zipformer.py:405] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-04-04 11:24:46,649 INFO [checkpoint.py:112] Loading checkpoint from pruned_transducer_stateless7_streaming_multi/exp/epoch-99.pt
2023-04-04 11:24:48,580 INFO [decode.py:782] Number of model parameters: 70369391
2023-04-04 11:24:48,581 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
2023-04-04 11:24:48,583 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
2023-04-04 11:24:56,104 INFO [decode.py:569] batch 0/?, cuts processed until now is 26
2023-04-04 11:24:56,283 INFO [zipformer.py:2401] attn_weights_entropy = tensor([2.2086, 2.5602, 1.0595, 1.2567, 1.8033, 1.3124, 3.1108, 1.7953],
device='cuda:0'), covar=tensor([0.0590, 0.0440, 0.0670, 0.1174, 0.0564, 0.0939, 0.0278, 0.0607],
device='cuda:0'), in_proj_covar=tensor([0.0058, 0.0075, 0.0053, 0.0051, 0.0056, 0.0056, 0.0094, 0.0054],
device='cuda:0'), out_proj_covar=tensor([0.0008, 0.0010, 0.0007, 0.0008, 0.0008, 0.0008, 0.0012, 0.0007],
device='cuda:0')
2023-04-04 11:26:24,252 INFO [decode.py:569] batch 20/?, cuts processed until now is 1545
2023-04-04 11:27:15,949 INFO [decode.py:569] batch 40/?, cuts processed until now is 2375
2023-04-04 11:27:44,906 INFO [decode.py:583] The transcripts are stored in pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/recogs-test-clean-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:27:44,999 INFO [utils.py:558] [test-clean-beam_size_4] %WER 2.40% [1263 / 52576, 155 ins, 90 del, 1018 sub ]
2023-04-04 11:27:45,193 INFO [decode.py:594] Wrote detailed error stats to pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/errs-test-clean-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:27:45,195 INFO [decode.py:608]
For test-clean, WER of different settings are: | |
beam_size_4 2.4 best for test-clean | |
2023-04-04 11:27:50,691 INFO [decode.py:569] batch 0/?, cuts processed until now is 30
2023-04-04 11:29:14,594 INFO [decode.py:569] batch 20/?, cuts processed until now is 1771
2023-04-04 11:30:02,467 INFO [decode.py:569] batch 40/?, cuts processed until now is 2696
2023-04-04 11:30:28,485 INFO [decode.py:583] The transcripts are stored in pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/recogs-test-other-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:30:28,583 INFO [utils.py:558] [test-other-beam_size_4] %WER 6.00% [3143 / 52343, 351 ins, 268 del, 2524 sub ]
2023-04-04 11:30:28,786 INFO [decode.py:594] Wrote detailed error stats to pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/errs-test-other-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:30:28,787 INFO [decode.py:608]
For test-other, WER of different settings are: | |
beam_size_4 6.0 best for test-other | |
2023-04-04 11:30:28,787 INFO [decode.py:814] Done!
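Note: the two `%WER` summary lines above follow the standard word-error-rate formula, (insertions + deletions + substitutions) / reference word count. A minimal sketch reproducing the reported figures from the logged counts (the `wer` helper below is illustrative, not icefall's own code):

```python
def wer(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """Word error rate as a percentage: (ins + del + sub) / reference words."""
    return 100.0 * (ins + dels + subs) / ref_words

# Counts taken from the two summary lines in this log.
clean = wer(155, 90, 1018, 52576)   # test-clean: 1263 errors / 52576 words
other = wer(351, 268, 2524, 52343)  # test-other: 3143 errors / 52343 words
print(f"test-clean WER: {clean:.2f}%")  # 2.40%, matching the log
print(f"test-other WER: {other:.2f}%")  # 6.00%, matching the log
```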