# wav2vec2-base-vi-vivos
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vi](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vi) on the VIVOS dataset. It achieves the following results on the evaluation set (an illustrative inference sketch follows the metrics):
- Loss: 0.3990
- Wer: 0.2339
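
As a quick usage illustration, here is a minimal transcription sketch with 🤗 Transformers. It assumes the checkpoint loads through the standard `Wav2Vec2Processor` and `Wav2Vec2ForCTC` classes; `example.wav` is a placeholder for a 16 kHz Vietnamese audio file, and the exact loading code for this checkpoint may differ.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "hieplpvip/wav2vec2-base-vi-vivos"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes standard processor files
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2 models expect 16 kHz mono input; "example.wav" is a placeholder.
speech, _ = librosa.load("example.wav", sr=16000)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```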
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
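
For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction rather than the original training script; `output_dir` and any argument not listed above are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-base-vi-vivos",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,              # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
)
```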
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.2335        | 1.37  | 500   | 1.9337          | 0.9589 |
| 2.0861        | 2.74  | 1000  | 1.6892          | 0.9078 |
| 1.8044        | 4.11  | 1500  | 1.3953          | 0.7989 |
| 1.5782        | 5.48  | 2000  | 1.1773          | 0.7221 |
| 1.3843        | 6.85  | 2500  | 1.0011          | 0.6243 |
| 1.2181        | 8.22  | 3000  | 0.8656          | 0.5361 |
| 1.1115        | 9.59  | 3500  | 0.7775          | 0.4933 |
| 0.9948        | 10.96 | 4000  | 0.6933          | 0.4286 |
| 0.9307        | 12.33 | 4500  | 0.6314          | 0.3959 |
| 0.8529        | 13.7  | 5000  | 0.5832          | 0.3560 |
| 0.8094        | 15.07 | 5500  | 0.5446          | 0.3292 |
| 0.7517        | 16.44 | 6000  | 0.5156          | 0.3064 |
| 0.701         | 17.81 | 6500  | 0.4899          | 0.2907 |
| 0.6753        | 19.18 | 7000  | 0.4668          | 0.2742 |
| 0.6621        | 20.55 | 7500  | 0.4528          | 0.2621 |
| 0.6455        | 21.92 | 8000  | 0.4345          | 0.2564 |
| 0.6159        | 23.29 | 8500  | 0.4258          | 0.2475 |
| 0.596         | 24.66 | 9000  | 0.4143          | 0.2435 |
| 0.5833        | 26.03 | 9500  | 0.4063          | 0.2387 |
| 0.5899        | 27.4  | 10000 | 0.4029          | 0.2357 |
| 0.5729        | 28.77 | 10500 | 0.3990          | 0.2339 |
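
The Wer column reports word error rate, which can be computed with the `evaluate` library. The snippet below is only an illustration of the metric with made-up transcripts; the card does not include the actual evaluation script.

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["xin chao cac ban"]   # hypothetical model transcripts
references = ["xin chào các bạn"]    # hypothetical ground-truth transcripts

# WER = (substitutions + deletions + insertions) / reference word count
print(wer_metric.compute(predictions=predictions, references=references))
```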
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3