---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- wft
- whisper
- automatic-speech-recognition
- audio
- speech
- generated_from_trainer
datasets:
- hf-internal-testing/librispeech_asr_dummy
metrics:
- wer
model-index:
- name: wft-test-model
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: hf-internal-testing/librispeech_asr_dummy
      type: hf-internal-testing/librispeech_asr_dummy
    metrics:
    - type: wer
      value: 4.724409448818897
      name: Wer
---

# wft-test-model

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the hf-internal-testing/librispeech_asr_dummy dataset.
It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 0.1248
- WER: 4.7244
- CER: 92.6847
- Decode Time: 0.5481
- WER Time: 0.0069
- CER Time: 0.0040
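
Since PEFT appears in the framework versions at the end of this card, the checkpoint is presumably a PEFT adapter rather than a full set of model weights. A minimal inference sketch under that assumption; the repo id `user/wft-test-model` is a placeholder:

```python
# Minimal inference sketch: load the base Whisper model, attach the PEFT
# adapter, and transcribe one sample from the dummy dataset.
# NOTE: "user/wft-test-model" is a placeholder repo id, not the real one.
import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model = PeftModel.from_pretrained(base, "user/wft-test-model")  # placeholder
model.eval()

# One utterance from the same dummy dataset used for training/evaluation.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```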

## Model description

This checkpoint is a PEFT adapter on top of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny), fine-tuned for English automatic speech recognition. Judging by its name and the dummy training data, it was produced as a short (100-step) test run of the wft fine-tuning pipeline rather than as a general-purpose ASR model.

## Intended uses & limitations

This model is suitable for testing and demonstration purposes only. It was trained and evaluated on a tiny dummy dataset, so the reported metrics do not reflect real-world transcription quality; in particular, the very high CER alongside a low WER suggests the character-level numbers should not be taken at face value.

## Training and evaluation data

Both training and evaluation use [hf-internal-testing/librispeech_asr_dummy](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy), a small English excerpt of LibriSpeech published for testing purposes.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
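
A hedged sketch of how these settings map onto `Seq2SeqTrainingArguments`; the card's tags suggest training went through the wft toolkit, and `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions inferred from the results table below:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters above; Adam betas/epsilon are the Trainer
# defaults, so they need no explicit arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="wft-test-model",   # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=100,
    eval_strategy="steps",         # assumed: the table below evaluates every 10 steps
    eval_steps=10,
    logging_steps=10,
    predict_with_generate=True,    # assumed: required to compute WER/CER during eval
)
```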

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER      | CER      | Decode Time | WER Time | CER Time |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:--------:|:--------:|
| 2.4107        | 0.1   | 10   | 1.9892          | 303.5433 | 117.1875 | 0.5449      | 0.0307   | 0.0039   |
| 1.2109        | 1.01  | 20   | 1.1659          | 155.1181 | 91.2642  | 0.5278      | 0.0062   | 0.0036   |
| 0.8855        | 1.11  | 30   | 0.8104          | 30.7087  | 56.8182  | 0.4832      | 0.0069   | 0.0041   |
| 0.4367        | 2.02  | 40   | 0.6315          | 25.1969  | 74.5739  | 0.5295      | 0.0058   | 0.0034   |
| 0.4398        | 2.12  | 50   | 0.4566          | 17.3228  | 91.9744  | 0.6078      | 0.0055   | 0.0030   |
| 0.2291        | 3.03  | 60   | 0.3006          | 9.0551   | 100.7102 | 0.5659      | 0.0058   | 0.0031   |
| 0.2281        | 3.13  | 70   | 0.2144          | 7.4803   | 90.4830  | 0.5507      | 0.0046   | 0.0030   |
| 0.111         | 4.04  | 80   | 0.1736          | 5.9055   | 89.3466  | 0.6595      | 0.0063   | 0.0032   |
| 0.0695        | 4.14  | 90   | 0.1345          | 4.7244   | 87.9261  | 0.6369      | 0.0402   | 0.0182   |
| 0.0761        | 5.05  | 100  | 0.1248          | 4.7244   | 92.6847  | 0.5481      | 0.0069   | 0.0040   |
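
For reference, the WER and CER columns can be reproduced with the `evaluate` library (a sketch; it assumes `jiwer` is installed, which `evaluate` uses as its backend for both metrics):

```python
import evaluate

# Word and character error rates, reported as percentages like the table above.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```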


### Framework versions

- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.5.0
- Datasets 3.0.2
- Tokenizers 0.20.1