chsougan committed
Commit fcb1619 · verified · 1 Parent(s): cdf905e

Update README.md

Files changed (1)
  1. README.md +13 -3
README.md CHANGED
@@ -1,10 +1,20 @@
 ---
-license: cc-by-4.0
+license: apache-2.0
 language:
 - es
 base_model:
 - pyannote/segmentation-3.0
 library_name: pyannote-audio
+tags:
+- pyannote
+- pyannote-audio
+- audio
+- voice
+- speech
+- speaker
+- speaker-diarization
+- segmentation
+pipeline_tag: automatic-speech-recognition
 ---
 # pyannote-segmentation-3.0-RTVE-primary
 
@@ -31,7 +41,7 @@ This system is intended to be used for speaker diarization of TV shows.
 
 ## Usage
 
-The instructions to obtain the RTTM output of each model can be found [here](https://huggingface.co/pyannote/speaker-diarization-3.1), using [this configuration file](config_diarization-3.1.yaml)
+The instructions to obtain the RTTM output of each model can be found [here](https://huggingface.co/pyannote/speaker-diarization-3.1), using this [configuration file](config_diarization-3.1.yaml)
 
 Once obtained, [this script](https://huggingface.co/chsougan/pyannote-segmentation-3.0-RTVE-primary/blob/main/primary_fusion.py) can be modified to obtain the fusion of each model's output.
 
@@ -104,4 +114,4 @@ If you use these models, please cite:
 pages = {327--330},
 doi = {10.21437/IberSPEECH.2024-68},
 }
-````
+````
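
For reference, the Usage section touched by this commit points to the speaker-diarization-3.1 instructions and a local configuration file for producing RTTM output. The sketch below is not part of the commit: assuming `config_diarization-3.1.yaml` sits in the working directory and using a hypothetical audio filename, it shows how such an RTTM file is typically generated with pyannote-audio before being passed to `primary_fusion.py`.

```python
# Minimal sketch (assumption, not part of this commit): run the diarization
# pipeline described by the repository's configuration file and save the
# result as RTTM, the format that primary_fusion.py later combines.
from pyannote.audio import Pipeline

# Load the pipeline from the local configuration file referenced in the README
# (path assumed here).
pipeline = Pipeline.from_pretrained("config_diarization-3.1.yaml")

# Apply diarization to an example recording (hypothetical filename).
diarization = pipeline("show_episode.wav")

# Write the output in RTTM format.
with open("show_episode.rttm", "w") as rttm:
    diarization.write_rttm(rttm)
```

Depending on how the configuration references the underlying pyannote models, loading may additionally require a Hugging Face access token, as described on the speaker-diarization-3.1 model page.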