---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
    A chat between a curious user and an artificial intelligence assistant.
    The assistant gives helpful, detailed, and polite answers to the user's
    questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/mIie6Mc6k_Uv9UZKXC_hw.png)

# 🌟 Check out the [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟

# Model Card for Taiwan LLM 8x7B-DPO

Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.

## Model description

- **Model type:** An 8x7B-parameter Mixtral MoE model fine-tuned on a mix of publicly available and synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-MoE-alpha](https://huggingface.co/yentinglin/Taiwan-LLM-MoE-alpha)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/

## Performance

Check out the leaderboard on the [TW Chatbot Arena](https://arena.twllm.com/).

TMMLU+ scores:
- yentinglin/Taiwan-LLM-MoE-alpha: 43.93
- yentinglin/Taiwan-LLM-8x7B-DPO: TBD
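
TMMLU+ items are four-option multiple-choice questions, so a rough way to probe the model on a single item is sketched below. The `format_mcq` helper, the instruction string, and the example question are purely illustrative and are not the official TMMLU+ evaluation harness.

```python
import torch
from transformers import pipeline

def format_mcq(question: str, choices: dict[str, str]) -> str:
    # Lay out the options and ask for the option letter only.
    options = "\n".join(f"{letter}. {text}" for letter, text in choices.items())
    return f"{question}\n{options}\n請只回答選項字母（A、B、C 或 D）。"  # "Answer with the option letter only."

pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-8x7B-DPO",
                torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": format_mcq(
        "台灣最高的山是哪一座？",  # "Which is the highest mountain in Taiwan?"
        {"A": "雪山", "B": "玉山", "C": "合歡山", "D": "阿里山"},
    )},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=8, do_sample=False, return_full_text=False)

# Take the first character of the reply as the predicted option letter.
print(out[0]["generated_text"].strip()[:1])
```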

## Intended uses

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install transformers>=4.34
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-8x7B-DPO", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "你是一個人工智慧助理",  # "You are an AI assistant."
    },
    # "How does the northeast monsoon affect Taiwan's climate?"
    {"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
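
If you would rather work with the tokenizer and model objects directly instead of `pipeline()`, a minimal sketch along the same lines (same checkpoint and chat template; the generation settings simply mirror the example above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Taiwan-LLM-8x7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "你是一個人工智慧助理"},      # "You are an AI assistant."
    {"role": "user", "content": "東北季風如何影響台灣氣候?"},  # same question as above
]

# Apply the chat template and tokenize in one step.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```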

## Citation

If you find Taiwan LLM useful in your work, please cite it with:

```
@misc{lin2023taiwan,
      title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
      author={Yen-Ting Lin and Yun-Nung Chen},
      year={2023},
      eprint={2311.17487},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```