Commit 0d75404 (parent: 13631d7): Update README.md

README.md CHANGED
@@ -13,7 +13,7 @@ metrics:
 
 # End-to-end SLU model for Timers and Such
 
-Attention-based RNN sequence-to-sequence model for [Timers and Such](https://
+Attention-based RNN sequence-to-sequence model for [Timers and Such](https://arxiv.org/abs/2104.01604) trained on the `train-real` subset. This model checkpoint achieves 86.7% accuracy on `test-real`.
 
 The model uses an ASR model trained on LibriSpeech ([`speechbrain/asr-crdnn-rnnlm-librispeech`](https://huggingface.co/speechbrain/asr-crdnn-rnnlm-librispeech)) to extract features from the input audio, then maps these features to an intent and slot labels using a beam search.
 
@@ -35,7 +35,7 @@ title = {SpeechBrain},
 year = {2021},
 publisher = {GitHub},
 journal = {GitHub repository},
-howpublished = {
+howpublished = {\\url{https://github.com/speechbrain/speechbrain}},
 }
 ```
 