Henok committed
Commit b7a3fe4 · verified · 1 Parent(s): d021ad4

Update README.md

Files changed (1):
  1. README.md (+24 -8)
README.md CHANGED
@@ -58,15 +58,31 @@ The dataset is available in both CSV and JSON formats, with columns tagged for e
 
 If you use this dataset, please cite our paper:
 
+
 ```
-@inproceedings{
-ademtew2024age,
-author = {Henok Biadglign Ademtew and Mikiyas Girma Birbo},
-title = {{AGE}: Amharic, Ge{\textquoteright}ez and English Parallel Dataset},
-booktitle = {5th Workshop on African Natural Language Processing},
-year = {2024},
-url = {https://openreview.net/forum?id=tHNfskz2WG}
+@inproceedings{ademtew-birbo-2024-age,
+    title = "{AGE}: {A}mharic, {G}e{'}ez and {E}nglish Parallel Dataset",
+    author = "Ademtew, Henok  and
+      Birbo, Mikiyas",
+    editor = "Ojha, Atul Kr.  and
+      Liu, Chao-hong  and
+      Vylomova, Ekaterina  and
+      Pirinen, Flammie  and
+      Abbott, Jade  and
+      Washington, Jonathan  and
+      Oco, Nathaniel  and
+      Malykh, Valentin  and
+      Logacheva, Varvara  and
+      Zhao, Xiaobing",
+    booktitle = "Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
+    month = aug,
+    year = "2024",
+    address = "Bangkok, Thailand",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.loresmt-1.14",
+    doi = "10.18653/v1/2024.loresmt-1.14",
+    pages = "139--145",
+    abstract = "African languages are not well-represented in Natural Language Processing (NLP). The main reason is a lack of resources for training models. Low-resource languages, such as Amharic and Ge{'}ez, cannot benefit from modern NLP methods because of the lack of high-quality datasets. This paper presents AGE, an open-source tripartite alignment of Amharic, Ge{'}ez, and English parallel dataset. Additionally, we introduced a novel, 1,000 Ge{'}ez-centered sentences sourced from areas such as news and novels. Furthermore, we developed a model from a multilingual pre-trained language model, which brings 12.29 and 30.66 for English-Ge{'}ez and Ge{'}ez to English, respectively, and 9.39 and 12.29 for Amharic-Ge{'}ez and Ge{'}ez-Amharic respectively.",
 }
 ```
 
-