whisper-large-v3-cit-do01-wd0-lr3e-06-steps1400-FULL6

This model is a fine-tuned version of openai/whisper-large-v3 on the 9712 FULL-2024-10-24 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3774
  • WER Ortho: 21.1633
  • WER: 15.3466
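
For reference, a minimal inference sketch using the standard transformers ASR pipeline. The repo id is the one this card is published under; the audio file name, device, and chunking settings are illustrative assumptions, not part of the original card.

```python
# Minimal inference sketch with the transformers ASR pipeline.
# "sample.wav" is a placeholder; swap in your own audio file.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Makkoen/whisper-large-v3-cit-do01-wd0-lr3e-06-steps1400-FULL6",
    torch_dtype=torch.float16,  # the published weights are FP16
    device="cuda:0",            # or "cpu"
)

# Whisper processes audio in 30 s windows; chunking handles longer files.
result = asr("sample.wav", chunk_length_s=30, return_timestamps=True)
print(result["text"])
```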

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto a training configuration follows the list):

  • learning_rate: 3e-06
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • training_steps: 1400
  • mixed_precision_training: Native AMP
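
These values map naturally onto transformers.Seq2SeqTrainingArguments. The sketch below is an assumption about how the run could have been configured, not the original training script; output_dir is a placeholder, and the 200-step evaluation cadence is inferred from the results table that follows.

```python
# Hedged sketch: the hyperparameters above expressed as
# Seq2SeqTrainingArguments. Only the numeric values come from this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./9712",                # placeholder output directory
    learning_rate=3e-6,
    per_device_train_batch_size=4,      # train_batch_size
    per_device_eval_batch_size=8,       # eval_batch_size
    seed=42,
    gradient_accumulation_steps=4,      # 4 x 4 = total train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=1400,
    fp16=True,                          # Native AMP mixed precision
    eval_strategy="steps",
    eval_steps=200,                     # matches the eval cadence in the table below
    predict_with_generate=True,
)
```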

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER Ortho | WER     |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.6939        | 0.3661 | 200  | 0.4724          | 26.4867   | 19.6596 |
| 0.5324        | 0.7323 | 400  | 0.4202          | 23.7327   | 17.6207 |
| 0.4623        | 1.0984 | 600  | 0.3970          | 21.8897   | 16.1424 |
| 0.4049        | 1.4645 | 800  | 0.3879          | 22.1227   | 16.3051 |
| 0.3962        | 1.8307 | 1000 | 0.3811          | 21.0907   | 15.3844 |
| 0.3770        | 2.1968 | 1200 | 0.3787          | 21.3722   | 15.6109 |
| 0.3422        | 2.5629 | 1400 | 0.3774          | 21.1633   | 15.3466 |
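
The two WER columns differ only in text normalization: WER Ortho scores the raw (orthographic) transcripts, while WER normalizes both sides before scoring. A minimal sketch of that distinction, assuming the evaluate library and the BasicTextNormalizer shipped with transformers; the example strings are illustrative only.

```python
# Hedged sketch of the WER Ortho vs. WER distinction.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

references = ["Hello, world!"]   # illustrative ground-truth transcript
predictions = ["hello world"]    # illustrative model output

# Orthographic WER: casing and punctuation count as errors.
wer_ortho = 100 * wer_metric.compute(references=references, predictions=predictions)

# Normalized WER: both sides are lower-cased and stripped of punctuation first.
wer = 100 * wer_metric.compute(
    references=[normalizer(r) for r in references],
    predictions=[normalizer(p) for p in predictions],
)
print(f"WER Ortho: {wer_ortho:.2f}  WER: {wer:.2f}")
```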

Framework versions

  • Transformers 4.45.1
  • PyTorch 1.13.1+cu117
  • Datasets 3.0.1
  • Tokenizers 0.20.0