---
tags:
- fp8
- vllm
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: ibm-granite/granite-3.1-8b-base
library_name: transformers
---

# granite-3.1-8b-base-FP8-dynamic

## Model Overview
- **Model Architecture:** granite-3.1-8b-base
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [ibm-granite/granite-3.1-8b-base](https://huggingface.co/ibm-granite/granite-3.1-8b-base).
It achieves an average score of 66.94 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 67.44.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [ibm-granite/granite-3.1-8b-base](https://huggingface.co/ibm-granite/granite-3.1-8b-base) to the FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 4096, 1
model_name = "neuralmagic/granite-3.1-8b-base-FP8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
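For example, a server can be launched and queried roughly as follows (a minimal sketch; the port, flags, and prompt are illustrative, not prescribed by this card):

```bash
# Launch an OpenAI-compatible server (listens on port 8000 by default)
vllm serve neuralmagic/granite-3.1-8b-base-FP8-dynamic --max-model-len 4096

# Query the completions endpoint (this is a base model, so plain completions fit best)
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "neuralmagic/granite-3.1-8b-base-FP8-dynamic", "prompt": "FP8 quantization reduces memory because", "max_tokens": 32}'
```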
## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

<details>
<summary>Model Creation Code</summary>

```bash
python quantize.py --model_id ibm-granite/granite-3.1-8b-base --save_path "output_dir/"
```

```python
import argparse
import os

from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot


def main():
    parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
    parser.add_argument('--model_id', type=str, required=True,
                        help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-base")')
    parser.add_argument('--save_path', type=str, default='.',
                        help='Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic')
    args = parser.parse_args()

    # Load model
    model = AutoModelForCausalLM.from_pretrained(
        args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(args.model_id)

    # Configure the quantization algorithm and scheme
    recipe = QuantizationModifier(
        targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
    )

    # Apply quantization
    oneshot(model=model, recipe=recipe)

    save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
    os.makedirs(save_path, exist_ok=True)

    # Save to disk in compressed-tensors format
    model.save_pretrained(save_path)
    tokenizer.save_pretrained(save_path)
    print(f"Model and tokenizer saved to: {save_path}")


if __name__ == "__main__":
    main()
```
</details>
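Under the `FP8_DYNAMIC` scheme, weights receive static scales at quantization time, while activation scales are computed per token on the fly at inference, so no calibration data is needed. The sketch below illustrates the per-token activation side only; it is an illustrative toy, not the llm-compressor or vLLM implementation (448 is the largest value representable in the `float8_e4m3fn` format):

```python
import torch

FP8_MAX = 448.0  # largest value representable in torch.float8_e4m3fn

def quantize_activations_per_token(x: torch.Tensor):
    """Illustrative dynamic per-token FP8 quantization (not the production kernel).

    Each row (token) gets its own scale, derived from its max absolute value,
    so the quantization adapts to the activations seen at runtime.
    """
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / FP8_MAX
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale

x = torch.randn(4, 16)                   # 4 tokens, hidden size 16
x_fp8, scale = quantize_activations_per_token(x)
x_hat = x_fp8.to(torch.float32) * scale  # dequantize to inspect the error
print((x - x_hat).abs().max())           # small round-off, not zero
```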
## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/), and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:
<details>
<summary>Evaluation Commands</summary>

OpenLLM Leaderboard V1:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/granite-3.1-8b-base-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

#### HumanEval
##### Generation
```
python3 codegen/generate.py \
  --model neuralmagic/granite-3.1-8b-base-FP8-dynamic \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval
```
##### Sanitization
```
python3 evalplus/sanitize.py \
  humaneval/neuralmagic--granite-3.1-8b-base-FP8-dynamic_vllm_temp_0.2
```
##### Evaluation
```
evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/neuralmagic--granite-3.1-8b-base-FP8-dynamic_vllm_temp_0.2-sanitized
```
</details>
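With `--n_samples 50` per task, pass@1 is computed with the standard unbiased pass@k estimator from the HumanEval paper, which for k=1 reduces to the fraction of passing samples per task. A minimal sketch (the sample counts in the example are hypothetical):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n = samples per task, c = passing samples, k = budget."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 the estimator reduces to c/n, e.g. 22 of 50 passing samples:
print(pass_at_k(50, 22, 1))  # 0.44
```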
### Accuracy
| Category | Metric | ibm-granite/granite-3.1-8b-base | neuralmagic/granite-3.1-8b-base-FP8-dynamic | Recovery (%) |
| :------- | :----- | :-----------------------------: | :-----------------------------------------: | :----------: |
| **OpenLLM V1** | ARC-Challenge (Acc-Norm, 25-shot) | 64.68 | 64.16 | 99.20 |
| | GSM8K (Strict-Match, 5-shot) | 60.88 | 58.45 | 95.99 |
| | HellaSwag (Acc-Norm, 10-shot) | 83.52 | 83.46 | 99.93 |
| | MMLU (Acc, 5-shot) | 63.33 | 63.35 | 100.03 |
| | TruthfulQA (MC2, 0-shot) | 51.33 | 51.56 | 100.45 |
| | Winogrande (Acc, 5-shot) | 80.90 | 80.66 | 99.70 |
| | **Average Score** | **67.44** | **66.94** | **99.26** |
| **Coding** | HumanEval Pass@1 | 44.10 | 44.80 | 101.59 |
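Recovery is the quantized score expressed as a percentage of the baseline score; for instance, reproducing the Average Score row:

```python
# Recovery (%) = quantized score / baseline score * 100
print(round(66.94 / 67.44 * 100, 2))  # 99.26, matching the table
```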
## Inference Performance

This model achieves up to 1.5x speedup in single-stream deployment and up to 1.1x speedup in multi-stream asynchronous deployment on L40 GPUs.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.6.post1 and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/granite-3.1-8b-base-FP8-dynamic --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>" --max-seconds 360 --backend aiohttp_server
```

where `<prompt_tokens>` and `<generated_tokens>` are set according to the prefill/decode profile of each use case in the tables below.
</details>
### Single-stream performance (measured with vLLM version 0.6.6.post1)
**Latency (s)**

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| L40 | granite-3.1-8b-base | | 25.1 | 3.2 | 25.3 | 3.2 | 3.2 | 6.3 | 13.4 |
| L40 | granite-3.1-8b-base-FP8-dynamic<br>(this model) | 1.47 | 16.8 | 2.2 | 17.1 | 2.2 | 2.1 | 4.2 | 9.3 |
| L40 | granite-3.1-8b-base-quantized.w4a16 | 2.72 | 8.9 | 1.2 | 9.2 | 1.2 | 1.1 | 2.3 | 5.3 |
### Multi-stream asynchronous performance (measured with vLLM version 0.6.6.post1)
**Maximum Throughput (Queries per Second)**

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| L40 | granite-3.1-8b-base | | 1.4 | 7.8 | 1.1 | 6.2 | 15.5 | 6.0 | 0.7 |
| L40 | granite-3.1-8b-base-FP8-dynamic<br>(this model) | 1.12 | 2.1 | 7.4 | 1.3 | 5.9 | 15.3 | 6.9 | 0.8 |
| L40 | granite-3.1-8b-base-quantized.w4a16 | 1.29 | 2.4 | 8.9 | 1.4 | 7.1 | 17.8 | 7.8 | 1.0 |
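One plausible reading of the Speedup column is the average per-use-case ratio against the BF16 baseline (baseline/quantized latency for single-stream, quantized/baseline QPS for multi-stream); recomputing this model's multi-stream figure from the rounded table values supports that reading:

```python
# QPS of the BF16 baseline vs. this FP8 model, per use case (from the table above)
baseline = [1.4, 7.8, 1.1, 6.2, 15.5, 6.0, 0.7]
fp8      = [2.1, 7.4, 1.3, 5.9, 15.3, 6.9, 0.8]

ratios = [q / b for q, b in zip(fp8, baseline)]
print(round(sum(ratios) / len(ratios), 2))  # 1.12, matching the reported speedup
```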