---
tags:
- int8
- vllm
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-405B-Instruct
---

# Meta-Llama-3.1-405B-Instruct-quantized.w8a8

## Model Overview
- **Model Architecture:** Meta-Llama-3
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Activation quantization:** INT8
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 8/19/2024
- **Version:** 1.0
- **License(s):** Llama3.1
- **Model Developers:** Neural Magic

Quantized version of [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct).
It achieves scores within 1% of the scores of the unquantized model for MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) to the INT8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements by approximately 50% and increasing matrix-multiply compute throughput by approximately 2x.
Weight quantization also reduces disk size requirements by approximately 50%.
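
As a back-of-the-envelope check on the ~50% figure (illustrative only; it ignores the KV cache, activation memory, and any layers left unquantized):

```python
num_params = 405e9  # approximate parameter count of Llama-3.1-405B

bf16_gb = num_params * 2 / 1e9  # BF16 stores 2 bytes per parameter
int8_gb = num_params * 1 / 1e9  # INT8 stores 1 byte per parameter

print(f"BF16 weights: ~{bf16_gb:.0f} GB")  # ~810 GB
print(f"INT8 weights: ~{int8_gb:.0f} GB")  # ~405 GB, about 50% smaller
```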

Only the weights and activations of the linear operators within transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, where a fixed linear scaling factor is applied between INT8 and floating point representations for each output channel dimension.
Linear scaling factors are computed by minimizing the mean squared error (MSE).
Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating point representations.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
GPTQ used a 1% damping factor and 512 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
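
The sketch below illustrates the two schemes in isolation (a simplified illustration, not the llm-compressor implementation: it uses max-abs scales rather than MSE-optimal ones, and omits the GPTQ weight updates):

```python
import torch

def quantize_weights_per_channel(w: torch.Tensor):
    # Symmetric static per-channel: one fixed scale per output channel (row).
    scales = w.abs().amax(dim=1, keepdim=True) / 127.0
    w_int8 = torch.clamp(torch.round(w / scales), -127, 127).to(torch.int8)
    return w_int8, scales

def quantize_activations_per_token(x: torch.Tensor):
    # Symmetric dynamic per-token: one scale per token, computed at runtime.
    scales = x.abs().amax(dim=-1, keepdim=True) / 127.0
    x_int8 = torch.clamp(torch.round(x / scales), -127, 127).to(torch.int8)
    return x_int8, scales

w = torch.randn(4096, 4096)  # [out_features, in_features]
x = torch.randn(16, 4096)    # 16 tokens
w_q, w_scales = quantize_weights_per_channel(w)
x_q, x_scales = quantize_activations_per_token(x)

# The rescaled INT8 matmul approximates the floating-point one:
y_approx = (x_q.float() * x_scales) @ (w_q.float() * w_scales).T
y_exact = x @ w.T
print((y_approx - y_exact).abs().max())
```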

## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8"
number_gpus = 8
max_model_len = 8192

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
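
For instance, after starting an endpoint with `vllm serve neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8 --tensor-parallel-size 8`, it can be queried with any OpenAI client (a minimal sketch; the default port and flags may vary across vLLM versions):

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on localhost:8000 by default;
# the api_key value is a placeholder since no authentication is configured.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
)
print(response.choices[0].message.content)
```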

## Creation

This model was created using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as presented in the code snippet below (using 8 A100 80GB GPUs).

```python
from transformers import AutoTokenizer
from datasets import load_dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers.compression.helpers import custom_offload_device_map

model_id = "meta-llama/Meta-Llama-3.1-405B-Instruct"

num_samples = 512
max_seq_len = 4096
num_gpus = 8
max_memory_per_gpu = "20GB"

tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_fn(example):
    # Render calibration samples with the chat template so they match
    # the format the model sees at inference time
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}

ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A8",
    ignore=["lm_head"],
    dampening_frac=0.01,
    observer="mse",
)

device_map = custom_offload_device_map(
    model_id,
    max_memory_per_gpu=max_memory_per_gpu,
    num_gpus=num_gpus,
    torch_dtype="auto",
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,  # use the offload map computed above
    torch_dtype="auto",
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("Meta-Llama-3.1-405B-Instruct-quantized.w8a8")
tokenizer.save_pretrained("Meta-Llama-3.1-405B-Instruct-quantized.w8a8")  # keep tokenizer with the weights
```
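
After the run completes, the export can be sanity-checked for quantization metadata (a quick check, assuming the export writes a `quantization_config` entry to `config.json`, as llm-compressor's compressed-tensors format does):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Meta-Llama-3.1-405B-Instruct-quantized.w8a8")
# For a compressed-tensors export this holds the W8A8 scheme details
print(config.quantization_config)
```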

## Evaluation

The model was evaluated on MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using the Neural Magic fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challenge and GSM-8K that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-405B-Instruct-evals).

### Accuracy

#### Open LLM Leaderboard evaluation scores
<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>Meta-Llama-3.1-405B-Instruct</strong></td>
    <td><strong>Meta-Llama-3.1-405B-Instruct-quantized.w8a8 (this model)</strong></td>
    <td><strong>Recovery</strong></td>
  </tr>
  <tr>
    <td>ARC Challenge (0-shot)</td>
    <td>96.93</td>
    <td>93.26</td>
    <td>100.0%</td>
  </tr>
  <tr>
    <td>GSM-8K (CoT, 8-shot, strict-match)</td>
    <td>96.44</td>
    <td>93.25</td>
    <td>100.2%</td>
  </tr>
  <tr>
    <td>Hellaswag (10-shot)</td>
    <td>88.33</td>
    <td>86.28</td>
    <td>99.9%</td>
  </tr>
  <tr>
    <td>Winogrande (5-shot)</td>
    <td>87.21</td>
    <td>85.00</td>
    <td>100.0%</td>
  </tr>
  <tr>
    <td>TruthfulQA (0-shot, mc2)</td>
    <td>64.64</td>
    <td>60.88</td>
    <td>101.8%</td>
  </tr>
</table>
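
Recovery is typically reported as the quantized model's score expressed as a percentage of the unquantized baseline's score; values near or above 100% indicate accuracy preserved within evaluation noise. Using hypothetical scores for illustration:

```python
def recovery(quantized_score: float, baseline_score: float) -> float:
    # Percentage of the baseline score retained by the quantized model
    return 100.0 * quantized_score / baseline_score

# hypothetical example: baseline 86.00, quantized 85.90
print(f"{recovery(85.90, 86.00):.1f}%")  # 99.9%
```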

### Reproduction

The results were obtained using the following commands:

#### MMLU
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=8 \
  --tasks mmlu_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

#### MMLU-CoT
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4064,max_gen_toks=1024,tensor_parallel_size=8 \
  --tasks mmlu_cot_0shot_llama_3.1_instruct \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto
```

#### ARC-Challenge
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=3940,max_gen_toks=100,tensor_parallel_size=8 \
  --tasks arc_challenge_llama_3.1_instruct \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto
```

#### GSM-8K
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,max_gen_toks=1024,tensor_parallel_size=8 \
  --tasks gsm8k_cot_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 8 \
  --batch_size auto
```

#### Hellaswag
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8 \
  --tasks hellaswag \
  --num_fewshot 10 \
  --batch_size auto
```

#### Winogrande
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8 \
  --tasks winogrande \
  --num_fewshot 5 \
  --batch_size auto
```

#### TruthfulQA
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8 \
  --tasks truthfulqa \
  --num_fewshot 0 \
  --batch_size auto
```