---
language:
- et
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational
base_model:
- tartuNLP/Llammas-base
---

# LLammas 🐑

Llama-2-7B instruction-tuned for Estonian in two stages:
1. Continued pre-training: 5B tokens from CulturaX, with 75% of documents in Estonian and 25% in English (see [Llammas-base](https://huggingface.co/tartuNLP/Llammas-base)),
2. Instruction-tuning: Alpaca-cleaned, Alpaca-est, the top-1 English conversations from OASST1, CoT and FLAN-V2 mixtures following open-instruct (10,000 examples each), WMT18 English-Estonian translation development data (formatted as documents), and English-Estonian held-out data from the general-domain MTee validation set.

[Alpaca-est](https://github.com/TartuNLP/alpaca-est) is an instruction dataset for Estonian, generated with *gpt-3.5-turbo-0613* following the Alpaca approach. More details are available in our [paper](https://arxiv.org/abs/2404.04042).

Additional resources:
* Paper: [https://aclanthology.org/2024.findings-naacl.210/](https://aclanthology.org/2024.findings-naacl.210/)
* Code: [github.com/TartuNLP/llammas](https://github.com/TartuNLP/llammas)
* Base model: [tartuNLP/Llammas-base](https://huggingface.co/tartuNLP/Llammas-base)
* 4-bit quantized model in GGUF: [AlbertUnn/LlammasGGUF](https://huggingface.co/AlbertUnn/LlammasGGUF)
* Alpaca-est dataset: [github.com/TartuNLP/alpaca-est](https://github.com/TartuNLP/alpaca-est)
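
The 4-bit GGUF model listed above can also be run without transformers, for example through the llama-cpp-python bindings. A minimal sketch, assuming a recent llama-cpp-python with `Llama.from_pretrained` and that the `"*.gguf"` glob matches the quantized file in the repository (the exact file name is an assumption):

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Assumption: "*.gguf" matches the quantized file in the repo;
# substitute the exact file name if the glob is ambiguous.
llm = Llama.from_pretrained(repo_id="AlbertUnn/LlammasGGUF", filename="*.gguf")

# The prompt follows the conversational format documented below.
prompt = "<|user|>\nTere!\n<|assistant|>\n"
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```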

### Using the model

Using the model in a text-generation pipeline:
```python
from transformers import pipeline
import torch

pipe = pipeline("text-generation", model="tartuNLP/Llammas", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Tere!"},
    {"role": "assistant", "content": "Tere! Kas saaksin teid kuidagi aidata?"},
    {"role": "user", "content": "Kuidas alustada kirja kirjutamist?"}
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.6, top_k=50, top_p=0.9)
print(outputs[0]["generated_text"][len(prompt):])
```


Using the model in a conversational pipeline (works with transformers==4.36.2; newer versions have output issues, and the conversational pipeline has since been removed from transformers):
```python
from transformers import pipeline, Conversation
import torch

pipe = pipeline("conversational", model="tartuNLP/Llammas", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Tere!"},
    {"role": "assistant", "content": "Tere! Kas saaksin teid kuidagi aidata?"},
    {"role": "user", "content": "Kuidas alustada kirja kirjutamist?"}
]

conversation = Conversation(messages)
conversation = pipe(conversation)
# The assistant's reply is appended as the last message of the conversation.
print(conversation.messages[-1]["content"])
```

Conversational format:
```
<|user|>
Tere!
<|assistant|>
Tere! Kas saaksin teid kuidagi aidata?</s>
<|user|>
Kuidas alustada kirja kirjutamist?
<|assistant|>
Kirja kirjutamiseks alustage tervitusega, näiteks "Tere!" või "Tere hommikust!". Seejärel tutvustage ennast ja mainige, kellega kirjutate. Kirjeldage oma mõtteid või küsimusi, mida soovite arutada. Lõpetage kiri viisakalt, näiteks "Tänan teid tähelepanu eest!" või "Parimate soovidega!"</s>
```
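
For reference, the same format can be produced without a pipeline by applying the tokenizer's chat template directly; a minimal sketch, with generation parameters mirroring the example above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("tartuNLP/Llammas")
model = AutoModelForCausalLM.from_pretrained(
    "tartuNLP/Llammas", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Kuidas alustada kirja kirjutamist?"}]

# Renders the <|user|>/<|assistant|> format shown above and appends
# the assistant header so generation continues from there.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True,
    temperature=0.6, top_k=50, top_p=0.9
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```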

### Citation
```bibtex
@inproceedings{kuulmets-etal-2024-teaching,
    title = "Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer",
    author = "Kuulmets, Hele-Andra  and
      Purason, Taido  and
      Luhtaru, Agnes  and
      Fishel, Mark",
    editor = "Duh, Kevin  and
      Gomez, Helena  and
      Bethard, Steven",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-naacl.210",
    doi = "10.18653/v1/2024.findings-naacl.210",
    pages = "3309--3325",
    abstract = "This paper explores cost-efficient methods to adapt pretrained Large Language Models (LLMs) to new lower-resource languages, with a specific focus on Estonian. Leveraging the Llama 2 model, we investigate the impact of combining cross-lingual instruction-tuning with additional monolingual pretraining. Our results demonstrate that even a relatively small amount of additional monolingual pretraining followed by cross-lingual instruction-tuning significantly enhances results on Estonian. Furthermore, we showcase cross-lingual knowledge transfer from high-quality English instructions to Estonian, resulting in improvements in commonsense reasoning and multi-turn conversation capabilities. Our best model, named Llammas, represents the first open-source instruction-following LLM for Estonian. Additionally, we publish Alpaca-est, the first general task instruction dataset for Estonian. These contributions mark the initial progress in the direction of developing open-source LLMs for Estonian.",
}
```