---
license: apache-2.0
base_model: yashcode00/wav2vec2-large-xlsr-indian-language-classification-featureExtractor
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xlsr-indian-language-classification-featureExtractor
  results: []
---
# wav2vec2-large-xlsr-indian-language-classification-featureExtractor
This model is a fine-tuned version of [yashcode00/wav2vec2-large-xlsr-indian-language-classification-featureExtractor](https://huggingface.co/yashcode00/wav2vec2-large-xlsr-indian-language-classification-featureExtractor) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2045
- Accuracy: 0.9484
## Model description
More information needed
## Intended uses & limitations
More information needed
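In the absence of documented usage, here is a minimal inference sketch using the `transformers` audio-classification pipeline. The repository id is assumed to match the base model listed in the metadata, and the audio file path is a placeholder; the pipeline resamples the input to the 16 kHz rate expected by wav2vec2 checkpoints.

```python
# Minimal inference sketch (assumptions: the fine-tuned checkpoint is published under
# the same repository id as the base model, and "sample_speech.wav" is a placeholder file).
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="yashcode00/wav2vec2-large-xlsr-indian-language-classification-featureExtractor",
)

# Returns the top predicted language labels with their scores.
predictions = classifier("sample_speech.wav")
for prediction in predictions:
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```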
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
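The list above can be expressed as a `TrainingArguments` configuration. This is a hedged sketch rather than the original training script, and `output_dir` is a placeholder; the total train batch size of 128 follows from 16 samples per device times 8 gradient-accumulation steps.

```python
# Sketch of TrainingArguments mirroring the hyperparameters above
# (output_dir is a placeholder; this is not the original training script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-indian-lang-clf",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 16 x 8 = 128 effective train batch size
    num_train_epochs=100,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```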
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0213        | 10.55 | 1000 | 0.2103          | 0.9460   |
| 0.0192        | 21.11 | 2000 | 0.1935          | 0.9480   |
| 0.0196        | 31.66 | 3000 | 0.2777          | 0.9278   |
| 0.014         | 42.22 | 4000 | 0.1927          | 0.9480   |
| 0.0141        | 52.77 | 5000 | 0.2184          | 0.9439   |
| 0.0106        | 63.32 | 6000 | 0.2401          | 0.9348   |
| 0.0112        | 73.88 | 7000 | 0.2206          | 0.9493   |
| 0.0085        | 84.43 | 8000 | 0.1907          | 0.9526   |
| 0.0079        | 94.99 | 9000 | 0.2052          | 0.9484   |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3