xianbin committed
Commit a7bb812 · verified · 1 Parent(s): 5282321

Update README.md

Files changed (1): README.md (+5, -6)
README.md CHANGED

@@ -4,10 +4,10 @@ license: mit
 # SEA-LION-7B-Instruct

 SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
-The size of the models range from 3 billion to 7 billion parameters.
+The sizes of the models range from 3 billion to 7 billion parameters.

 SEA-LION-7B-Instruct is a multilingual model which has been fine-tuned with **thousands of English and Indonesian instruction-completion pairs** alongside a smaller pool of instruction-completion pairs from other ASEAN languages.
-These instructions have been carefully curated and rewritten to ensure the model is trained on truly open, commercially permissive and high quality datasets.
+These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high quality datasets.

 SEA-LION stands for _Southeast Asian Languages In One Network_.

@@ -19,7 +19,7 @@ SEA-LION stands for _Southeast Asian Languages In One Network_.

 ## Model Details
 ### Base model
-We perform instruction tuning in English and Indonesian on our [pre-trained SEA-LION-7B](https://huggingface.co/aisingapore/sea-lion-7b), a decoder model using the MPT architecture, to create SEA-LION-7B-Instruct.
+We performed instruction tuning in English and Indonesian on our [pre-trained SEA-LION-7B](https://huggingface.co/aisingapore/sea-lion-7b), a decoder model using the MPT architecture, to create SEA-LION-7B-Instruct.

 ### Benchmark Performance
 We evaluated SEA-LION-7B-Instruct on the BHASA benchmark ([arXiv](https://arxiv.org/abs/2309.06085v2) and [GitHub](https://github.com/aisingapore/bhasa)) across a variety of tasks.
@@ -131,8 +131,7 @@ For more info, please contact us using this [SEA-LION Inquiry Form](https://form

 ## Disclaimer

-This the repository for the commercial instruction-tuned model.
+This is the repository for the commercial instruction-tuned model.
 The model has _not_ been aligned for safety.
 Developers and users should perform their own safety fine-tuning and related security measures.
-In no event shall the authors be held liable for any claim, damages, or other liability
-arising from the use of the released weights and codes.
+In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
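
The updated README describes SEA-LION-7B-Instruct as an MPT-architecture decoder model hosted on Hugging Face. A minimal loading sketch, assuming the instruct checkpoint lives at `aisingapore/sea-lion-7b-instruct` and that its custom MPT modeling code needs `trust_remote_code=True` (both are assumptions, not confirmed by this diff):

```python
# Minimal sketch: load SEA-LION-7B-Instruct with Hugging Face transformers.
# Assumptions: the repository id "aisingapore/sea-lion-7b-instruct" and the
# need for trust_remote_code=True (custom MPT modeling code) are not
# confirmed by this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/sea-lion-7b-instruct"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Example instruction; the README's prompt template (if any) is not shown here.
prompt = "Describe the cuisine of Southeast Asia in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```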