  - split: test
    path: data/ru-en.json
---
## IdiomsInCtx-MT Dataset
This repository contains the IdiomsInCtx-MT dataset used in our ACL 2024 paper: [The Fine-Tuning Paradox: Boosting Translation Quality Without Sacrificing LLM Abilities](https://aclanthology.org/2024.acl-long.336/) (preprint on [arXiv](https://arxiv.org/abs/2405.20089)). See [this GitHub repo](https://github.com/amazon-science/idioms-incontext-mt) for the origin of the data.
### Description
The dataset consists of idiomatic expressions in context and their human-written translations, with 1,000 translations per direction. It covers two language pairs (English-German and English-Russian) across three translation directions:
1. English → German (`en-de`)
2. German → English (`de-en`)
3. Russian → English (`ru-en`)

The dataset is designed to evaluate the performance of large language models and machine translation systems in handling idiomatic expressions, which can be challenging due to their non-literal meanings.
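
To see why non-literal meaning is hard for surface-level comparison, consider a word-for-word rendering of the German example from the usage section against its idiomatic reference. The sketch below uses a toy unigram F1 overlap; the literal rendering and the metric are illustrative only, not part of the dataset or the paper's evaluation:

```python
from collections import Counter

def unigram_f1(hyp: str, ref: str) -> float:
    """Unigram F1 overlap between whitespace tokens (a toy surface metric)."""
    h, r = Counter(hyp.lower().split()), Counter(ref.lower().split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

# Idiomatic reference vs. a hypothetical word-for-word rendering of
# "Es ist mir wurst, wenn du nicht kommst."
reference = "I couldn't care less if you don't come."
literal = "It is sausage to me if you don't come."

print(round(unigram_f1(literal, reference), 2))  # low overlap despite identical meaning
```

Despite the two sentences meaning the same thing, the overlap score is low, which is exactly the failure mode idiom-aware evaluation needs to expose.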
### Usage
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("davidstap/IdiomsInCtx-MT", "de-en")  # available directions: de-en, en-de, ru-en
>>> dataset
DatasetDict({
    test: Dataset({
        features: ['de', 'en'],
        num_rows: 1000
    })
})
>>> dataset['test']['de'][0]
'Es ist mir wurst, wenn du nicht kommst.'
>>> dataset['test']['en'][0]
"I couldn't care less if you don't come."
```
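
If you work with the raw JSON files (e.g. `data/ru-en.json` referenced in the front matter) rather than the Hub loader, the records can be handled with the standard library alone. The record schema below, a list of objects keyed by language code, is an assumption based on the features reported by `load_dataset`, not verified against the actual files:

```python
import json

# Stand-in for the contents of a de-en data file. ASSUMPTION: records
# are a JSON list keyed by language code, mirroring the `features`
# shown by load_dataset; check the actual files before relying on this.
raw = """[
  {"de": "Es ist mir wurst, wenn du nicht kommst.",
   "en": "I couldn't care less if you don't come."}
]"""

records = json.loads(raw)
src = [r["de"] for r in records]  # source side
tgt = [r["en"] for r in records]  # reference translations
print(len(records), src[0])
```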
### Citation
If you use this dataset in your work, please cite our paper:
```
@inproceedings{stap-etal-2024-fine,
    title = "The Fine-Tuning Paradox: Boosting Translation Quality Without Sacrificing {LLM} Abilities",
    author = "Stap, David and
      Hasler, Eva and
      Byrne, Bill and
      Monz, Christof and
      Tran, Ke",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    year = "2024",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.336",
    pages = "6189--6206",
}
```
### License
This dataset is licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).