---
license: llama2
language:
- en
library_name: transformers
datasets:
- togethercomputer/llama-instruct
---

# Llama-2-7B-32K-Instruct Quantized

## Model Description

Llama-2-7B-32K-Instruct is an open-source, long-context chat model fine-tuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) on high-quality instruction and chat data.
We built Llama-2-7B-32K-Instruct with fewer than 200 lines of Python using the [Together API](https://together.ai/blog/api-announcement), and we make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
We hope this enables everyone to fine-tune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K). Play with the [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

Llama-2-7B-32K-Instruct is fine-tuned on a combination of two parts:
1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.
We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca: producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)).
The complete dataset is released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).

2. **Long-context Summarization and Long-context QA**.
We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model on the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).

The final data mixture used for model fine-tuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%), as sketched below.

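The exact mixing code ships with the recipe linked above; purely as an illustration, a weighted sampler matching this 50/25/25 ratio could look like the following sketch (source names and example contents are placeholders, not the official pipeline):

```python
import random

# Placeholder pools standing in for the three real datasets.
sources = {
    "instruct_19k": ["<distilled instruction conversation>"],  # 50%
    "booksum": ["<BookSum summarization example>"],            # 25%
    "mqa": ["<multi-document QA example>"],                    # 25%
}
weights = [0.50, 0.25, 0.25]

def sample_batch(n: int) -> list[str]:
    """Draw a fine-tuning batch following the 50/25/25 mixture."""
    names = random.choices(list(sources), weights=weights, k=n)
    return [random.choice(sources[name]) for name in names]

print(sample_batch(8))
```
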
## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.

To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:
```bash
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
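
After installing, you can sanity-check that the Flash Attention extension built correctly; this quick check is a suggestion of ours, not part of the official instructions:

```python
# If this import succeeds, the flash-attn wheel compiled and installed correctly.
import flash_attn
print(flash_attn.__version__)
```
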
You can load the model directly from the Hugging Face model hub using:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model in half precision.
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
                                             trust_remote_code=True, torch_dtype=torch.float16)

# Encode an instruction in the [INST] prompt format and generate a reply.
# do_sample=True is required for temperature/top_p/top_k to take effect.
input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128, do_sample=True,
                        temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using a prompt formatted as:

```
[INST]\n<your instruction here>\n[/INST]\n\n
```
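
To avoid hand-formatting mistakes, a small helper can build this prompt; this is a minimal convenience sketch of ours, not part of the model's tooling:

```python
def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in the [INST] template the model expects."""
    return f"[INST]\n{instruction}\n[/INST]\n\n"

prompt = format_prompt("Write a poem about cats")
```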

For example, if we query the model with

```
[INST]\nWrite a poem about cats\n[/INST]\n\n
```

the model will return
```
[INST]
Write a poem about cats
[/INST]

Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:

Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.

With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.

But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.

So let us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.

This poem captures the essence of cats, highlighting their beauty, independence, and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```

## Model Evaluation

We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/);
2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and
3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K).
We compare with models including
[GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5),
[Llama-2-7B-Chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),
[Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k),
and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k).
We summarize the results below:

* Alpaca Eval
| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307 |
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |

* Rouge Score over BookSum
| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |

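For reference, R1/R2/RL scores like those above can be computed with the open-source `rouge-score` package; a minimal sketch, assuming the scores are ROUGE F-measures and with placeholder texts:

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "<gold BookSum summary>"     # placeholder
candidate = "<model-generated summary>"  # placeholder
scores = scorer.score(reference, candidate)
print({name: round(score.fmeasure, 3) for name, score in scores.items()})
```
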
* Accuracy over MQA
| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |

## Limitations and Bias

As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).