Update README.md
README.md
CHANGED
@@ -125,6 +125,10 @@ similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx)
```
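The snippet above ranks every context by dot-product similarity with the query. A minimal self-contained sketch of the same ranking step, using random stand-in embeddings since the encoder outputs are not reproduced here; it also shows that `torch.topk` (an alternative when only the best few contexts are needed) agrees with the head of the full `argsort` ranking:

```python
import torch

# Stand-ins for the encoder outputs (assumed shapes: query (1, dim), contexts (num_ctx, dim)).
torch.manual_seed(0)
query_emb = torch.randn(1, 8)
ctx_emb = torch.randn(5, 8)

# Dot-product similarity between the query and every context.
similarities = query_emb.matmul(ctx_emb.transpose(0, 1))  # (1, num_ctx)

# Full ranking of context indices, best first.
ranked_results = torch.argsort(similarities, dim=-1, descending=True)  # (1, num_ctx)

# Equivalent way to fetch just the top-k scores and indices.
top_scores, top_idx = torch.topk(similarities, k=3, dim=-1)

# The top-k indices match the first k entries of the full ranking.
assert torch.equal(top_idx, ranked_results[:, :3])
```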

## Evaluations on Multi-Turn QA Retrieval Benchmark

**(UPDATE!!)** We evaluate multi-turn QA retrieval on five datasets: Doc2Dial, QuAC, QReCC, TopiOCQA, and INSCIT, which can be found in the [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench). The evaluation scripts can be found [here](https://huggingface.co/nvidia/dragon-multiturn-query-encoder/tree/main/evaluation).
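Retrieval benchmarks like these are typically scored with top-k recall over the ranked contexts. A hedged sketch of that metric; the helper name and the single-gold-context-per-query assumption are illustrative, not taken from the evaluation scripts linked above:

```python
import torch

def topk_recall(similarities: torch.Tensor, gold: torch.Tensor, k: int) -> float:
    """Fraction of queries whose gold context appears in the top-k ranked contexts.

    similarities: (num_queries, num_ctx) similarity scores.
    gold: (num_queries,) index of the gold context for each query.
    """
    top_idx = torch.topk(similarities, k=k, dim=-1).indices  # (num_queries, k)
    hits = (top_idx == gold.unsqueeze(-1)).any(dim=-1)       # (num_queries,)
    return hits.float().mean().item()

# Toy check: two queries, four contexts.
sims = torch.tensor([[0.9, 0.1, 0.3, 0.2],
                     [0.2, 0.8, 0.7, 0.1]])
gold = torch.tensor([0, 2])
r1 = topk_recall(sims, gold, k=1)  # query 1 hits at rank 1, query 2 misses -> 0.5
r2 = topk_recall(sims, gold, k=2)  # query 2's gold context enters the top-2 -> 1.0
```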

## License

Dragon-multiturn is built on top of [Dragon](https://arxiv.org/abs/2302.07452). We refer users to the original license of the Dragon model. Dragon-multiturn is also subject to the [Terms of Use](https://openai.com/policies/terms-of-use).