---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: PolyAI/minds14
      type: PolyAI/minds14
      config: en-US
      split: train
      args: en-US
    metrics:
    - name: Wer
      type: wer
      value: 0.4964580873671783
---

# whisper-tiny-minds14

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9507
- Wer Ortho: 0.4855
- Wer: 0.4965

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (reproduced as a code sketch after the results table below):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 2000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer Ortho | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.7253        | 1.7857  | 50   | 0.5916          | 0.3751    | 0.3583 |
| 0.158         | 3.5714  | 100  | 0.6146          | 0.3399    | 0.3235 |
| 0.0244        | 5.3571  | 150  | 0.6676          | 0.3307    | 0.3264 |
| 0.0127        | 7.1429  | 200  | 0.6988          | 0.3251    | 0.3188 |
| 0.0065        | 8.9286  | 250  | 0.7239          | 0.3350    | 0.3329 |
| 0.0023        | 10.7143 | 300  | 0.7469          | 0.3344    | 0.3294 |
| 0.002         | 12.5    | 350  | 0.7677          | 0.3257    | 0.3217 |
| 0.0009        | 14.2857 | 400  | 0.7667          | 0.3208    | 0.3182 |
| 0.0009        | 16.0714 | 450  | 0.8388          | 0.3405    | 0.3388 |
| 0.0031        | 17.8571 | 500  | 0.7991          | 0.3313    | 0.3323 |
| 0.0003        | 19.6429 | 550  | 0.8032          | 0.3436    | 0.3406 |
| 0.0002        | 21.4286 | 600  | 0.8200          | 0.3418    | 0.3400 |
| 0.001         | 23.2143 | 650  | 0.8118          | 0.3436    | 0.3406 |
| 0.0005        | 25.0    | 700  | 0.8278          | 0.3344    | 0.3323 |
| 0.0007        | 26.7857 | 750  | 0.8299          | 0.3356    | 0.3318 |
| 0.0006        | 28.5714 | 800  | 0.8390          | 0.3344    | 0.3318 |
| 0.0004        | 30.3571 | 850  | 0.8442          | 0.3350    | 0.3323 |
| 0.0002        | 32.1429 | 900  | 0.8444          | 0.3307    | 0.3294 |
| 0.0005        | 33.9286 | 950  | 0.8549          | 0.3344    | 0.3329 |
| 0.0007        | 35.7143 | 1000 | 0.8515          | 0.3331    | 0.3329 |
| 0.0003        | 37.5    | 1050 | 0.8571          | 0.3263    | 0.3264 |
| 0.0005        | 39.2857 | 1100 | 0.8504          | 0.3307    | 0.3294 |
| 0.0001        | 41.0714 | 1150 | 0.8654          | 0.3313    | 0.3318 |
| 0.0005        | 42.8571 | 1200 | 0.8724          | 0.3337    | 0.3347 |
| 0.0001        | 44.6429 | 1250 | 0.8806          | 0.3325    | 0.3341 |
| 0.0001        | 46.4286 | 1300 | 0.8901          | 0.3344    | 0.3359 |
| 0.0001        | 48.2143 | 1350 | 0.8941          | 0.3344    | 0.3359 |
| 0.0001        | 50.0    | 1400 | 0.8987          | 0.3337    | 0.3353 |
| 0.0           | 51.7857 | 1450 | 0.9018          | 0.3337    | 0.3359 |
| 0.0           | 53.5714 | 1500 | 0.9073          | 0.3325    | 0.3353 |
| 0.0           | 55.3571 | 1550 | 0.9106          | 0.3319    | 0.3347 |
| 0.0           | 57.1429 | 1600 | 0.9152          | 0.3319    | 0.3347 |
| 0.0           | 58.9286 | 1650 | 0.9198          | 0.4824    | 0.4917 |
| 0.0           | 60.7143 | 1700 | 0.9242          | 0.4824    | 0.4923 |
| 0.0           | 62.5    | 1750 | 0.9279          | 0.4849    | 0.4947 |
| 0.0           | 64.2857 | 1800 | 0.9327          | 0.4855    | 0.4953 |
| 0.0           | 66.0714 | 1850 | 0.9374          | 0.4849    | 0.4953 |
| 0.0           | 67.8571 | 1900 | 0.9417          | 0.4855    | 0.4965 |
| 0.0           | 69.6429 | 1950 | 0.9461          | 0.4855    | 0.4965 |
| 0.0           | 71.4286 | 2000 | 0.9507          | 0.4855    | 0.4965 |
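The hyperparameters listed above map directly onto `Seq2SeqTrainingArguments`. Below is a minimal sketch of that configuration; `output_dir`, the eval cadence flags, and `predict_with_generate` are assumptions inferred from the log above (evaluation every 50 steps, WER reported at each eval), not stated in this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-minds14",   # assumed output/repo name
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults,
    # so they need no explicit arguments here.
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=2000,
    fp16=True,                           # "Native AMP" mixed precision
    eval_strategy="steps",               # assumed from the 50-step eval log
    eval_steps=50,
    predict_with_generate=True,          # assumed; needed to compute WER
)
```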
### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
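### Inference example

A minimal usage sketch with the `transformers` ASR pipeline; the model id and audio path below are placeholders, not part of this card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; "whisper-tiny-minds14" is a placeholder
# for wherever this model is actually hosted.
asr = pipeline("automatic-speech-recognition", model="whisper-tiny-minds14")

# Transcribe an audio file (hypothetical path); the pipeline resamples the
# input to Whisper's expected 16 kHz.
result = asr("sample.wav")
print(result["text"])
```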