matildecs committed (verified)
Commit 942bdf1 · 1 Parent(s): 0ce64b8

End of training

README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2758
-- Wer: 17.7832
+- Loss: 0.2418
+- Wer: 16.0707
 
 ## Model description
 
@@ -38,26 +38,27 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 5e-06
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 6000
+- training_steps: 7000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step | Validation Loss | Wer     |
 |:-------------:|:------:|:----:|:---------------:|:-------:|
-| 0.5575        | 0.0727 | 1000 | 0.6201          | 35.9145 |
-| 0.4618        | 0.1454 | 2000 | 0.5084          | 30.6525 |
-| 0.3571        | 0.2181 | 3000 | 0.4122          | 25.1587 |
-| 0.3845        | 0.2908 | 4000 | 0.3702          | 23.1700 |
-| 0.2471        | 0.3635 | 5000 | 0.3127          | 19.8575 |
-| 0.2415        | 0.4362 | 6000 | 0.2758          | 17.7832 |
+| 0.2962        | 0.0727 | 1000 | 0.3163          | 20.0659 |
+| 0.2756        | 0.1454 | 2000 | 0.2962          | 19.2670 |
+| 0.2405        | 0.2181 | 3000 | 0.2771          | 18.1353 |
+| 0.2917        | 0.2908 | 4000 | 0.2644          | 17.5769 |
+| 0.2117        | 0.3635 | 5000 | 0.2536          | 16.7275 |
+| 0.2334        | 0.4362 | 6000 | 0.2455          | 16.3825 |
+| 0.2408        | 0.5089 | 7000 | 0.2418          | 16.0707 |
 
 
 ### Framework versions
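The `Wer` column in the README above is the word error rate: the word-level Levenshtein distance between reference and hypothesis transcripts, divided by the number of reference words, times 100. Whisper fine-tuning pipelines typically compute it with the `evaluate`/`jiwer` libraries (an assumption; this commit does not show the training script). A minimal self-contained sketch of the metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: edit distance over words,
    normalized by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Single-row dynamic-programming Levenshtein distance over words.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i  # prev holds the diagonal cell d[i-1][j-1]
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(
                d[j] + 1,                            # deletion
                d[j - 1] + 1,                        # insertion
                prev + (ref[i - 1] != hyp[j - 1]),   # substitution or match
            )
            prev = cur
    return 100.0 * d[len(hyp)] / len(ref)
```

For example, one inserted word against a two-word reference gives `wer("hello world", "hello there world")` = 50.0, on the same scale as the 16.0707 reported in this commit.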
runs/Jan18_10-16-32_gnode34/events.out.tfevents.1737191804.gnode34.1021011.0 CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:cb46dde305a6e0c7271ef5fef5374b2ce6cb73a38f8169b167d4accb63ce7946
3
- size 68031
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b24fb109bf7258ff518466c1ce1b9023428d794505249e0b59217446ac9e6f46
3
+ size 68385
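The substantive change in this commit is the training configuration: the learning rate drops tenfold (5e-05 → 5e-06) and the run is extended by 1000 steps (6000 → 7000), which the results table shows improving eval WER from 17.78 to 16.07. The updated configuration can be written out as a plain mapping; wiring these fields into an actual trainer (e.g. `transformers.Seq2SeqTrainingArguments`) is an assumption about the training setup, which this commit does not include:

```python
# Training configuration after this commit, mirroring the README's
# hyperparameter list. Mapping these onto a concrete trainer API is
# an assumption; only the values themselves come from the model card.
training_config = {
    "learning_rate": 5e-06,              # lowered from 5e-05 in this commit
    "train_batch_size": 16,
    "eval_batch_size": 8,
    "seed": 42,
    "optimizer": "Adam(betas=(0.9, 0.999), eps=1e-08)",
    "lr_scheduler_type": "linear",
    "lr_scheduler_warmup_steps": 500,
    "training_steps": 7000,              # extended from 6000 in this commit
    "mixed_precision_training": "Native AMP",
}
```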