mrapacz committed (verified)
Commit 901c05c · Parent: 22fd18b

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +60 -2
README.md CHANGED
@@ -12,14 +12,16 @@ datasets:
  ---
  # Model Card for Ancient Greek to Polish Interlinear Translation Model
 
- This model performs interlinear translation from Ancient Greek to {Language}, maintaining word-level alignment between source and target texts.
+ This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts.
+
+ You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
 
  ## Model Details
 
  ### Model Description
 
  - **Developed By:** Maciej Rapacz, AGH University of Kraków
- - **Model Type:** Neural machine translation (T5-based)
+ - **Model Type:** MT5ForConditionalGeneration
  - **Base Model:** PhilTa
  - **Tokenizer:** PhilTa
  - **Language(s):** Ancient Greek (source) → Polish (target)
@@ -37,3 +39,59 @@ This model performs interlinear translation from Ancient Greek to {Language}, ma
 
  - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
  - **Paper:** https://aclanthology.org/2025.loreslm-1.11/
+
+ ## Usage Example
+
+ ```python
+ >>> from transformers import AutoModelForSeq2SeqLM, T5TokenizerFast
+ >>> # Each source word is paired with its morphological tag via <extra_id_1>,
+ >>> # and the word blocks are joined with <extra_id_0>.
+ >>> text_blocks = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
+ >>> tag_blocks = ['V-PIA-3S', 'PPro-DM3S', 'Art-NMS', 'N-NMS', 'V-PMA-2S', 'V-AMA-2S', 'Art-AMS', 'N-AMS', 'PPro-G2S', 'Conj', 'V-PMA-2S']
+ >>> combined_text = []
+ >>> for text, tag in zip(text_blocks, tag_blocks):
+ ...     combined_text.append(f"{text} <extra_id_1>{tag}")
+ >>> formatted_text = " <extra_id_0> ".join(combined_text)
+ >>> tokenizer = T5TokenizerFast.from_pretrained("mrapacz/interlinear-pl-philta-t-w-t-normalized-bh")
+ >>> inputs = tokenizer(
+ ...     text=formatted_text,
+ ...     return_tensors="pt"
+ ... )
+ >>> model = AutoModelForSeq2SeqLM.from_pretrained("mrapacz/interlinear-pl-philta-t-w-t-normalized-bh")
+ >>> outputs = model.generate(
+ ...     **inputs,
+ ...     max_new_tokens=100,
+ ...     early_stopping=True,
+ ... )
+ >>> tokenizer.decode(outputs[0], skip_special_tokens=True)
+ '- zaś - zaś - czynie - czynie - czynie - czynie - czynie - czynie - czynie'
+ ```
+
+ ## Citation
+
+ If you use this model, please cite the following paper:
+
+ ```
+ @inproceedings{rapacz-smywinski-pohl-2025-low,
+     title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
+     author = "Rapacz, Maciej and
+       Smywi{\'n}ski-Pohl, Aleksander",
+     editor = "Hettiarachchi, Hansi and
+       Ranasinghe, Tharindu and
+       Rayson, Paul and
+       Mitkov, Ruslan and
+       Gaber, Mohamed and
+       Premasiri, Damith and
+       Tan, Fiona Anting and
+       Uyangodage, Lasitha",
+     booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
+     month = jan,
+     year = "2025",
+     address = "Abu Dhabi, United Arab Emirates",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2025.loreslm-1.11/",
+     pages = "145--165",
+     abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
+ }
+ ```
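For preparing your own sentences, the helper below is a minimal sketch distilled from the usage example added in this commit: it reproduces the same input layout, with each Ancient Greek word followed by `<extra_id_1>` and its morphological tag, and word blocks joined by ` <extra_id_0> `. The function name `format_interlinear_input` is illustrative only and is not part of the model card or the training repository.

```python
# Minimal sketch (illustrative, not part of the original card): build the input
# string used by the usage example above. Each source word is followed by
# "<extra_id_1>" plus its morphological tag; word blocks are joined with
# " <extra_id_0> ".
from typing import List


def format_interlinear_input(words: List[str], tags: List[str]) -> str:
    if len(words) != len(tags):
        raise ValueError("expected exactly one morphological tag per source word")
    return " <extra_id_0> ".join(
        f"{word} <extra_id_1>{tag}" for word, tag in zip(words, tags)
    )


# Reusing the sample sentence from the usage example:
words = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
tags = ['V-PIA-3S', 'PPro-DM3S', 'Art-NMS', 'N-NMS', 'V-PMA-2S', 'V-AMA-2S', 'Art-AMS', 'N-AMS', 'PPro-G2S', 'Conj', 'V-PMA-2S']
print(format_interlinear_input(words, tags))
```

The printed string matches `formatted_text` in the usage example and can be passed directly to the tokenizer.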