TinyLlama 1.1B Chat 1.0 - DeepSparse

This repo contains model files for TinyLlama 1.1B Chat optimized for DeepSparse, a CPU inference runtime for sparse models.

This model was pruned to 50% sparsity and quantized with SparseGPT, using SparseML.

Inference

Install DeepSparse LLM for fast inference on CPUs:

pip install deepsparse-nightly[llm]

Run in a Python pipeline:

from deepsparse import TextGeneration

prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

model = TextGeneration(model_path="hf:nm-testing/TinyLlama-1.1B-Chat-v1.0-pruned50-quant-ds-v2")
print(model(formatted_prompt, max_new_tokens=200).generations[0].text)

"""
Sure! Here's a recipe for making banana bread:

Ingredients:
- 1 banana
- 1 cup of all-purpose flour
- 1 cup of cocoa powder
- 1 cup of sugar
- 1 cup of melted coconut oil
- 1 cup of salt

Instructions:
1. Preheat the oven to 375°F.
2. Add the banana to the flour mixture, and mix until smooth.
3. Add the cocoa powder, sugar, melted coconut oil, salt, and mix until smooth.
4. Add the melted coconut oil, salt, and mix until smooth.
5. Add the melted coconut oil, salt, and mix until smooth.
6. Add the banana, salt, and mix until smooth.


"""

Prompt template

<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
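
The template above can be applied with a small helper before calling the pipeline; `format_chatml_prompt` is an illustrative name, not part of the DeepSparse API:

```python
def format_chatml_prompt(prompt: str) -> str:
    """Wrap a user message in the ChatML-style template this model expects."""
    return f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

# The resulting string is what you pass to the TextGeneration pipeline.
formatted = format_chatml_prompt("How to make banana bread?")
print(formatted)
```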

Sparsification

For details on how this model was sparsified, see the recipe.yaml in this repo and follow the instructions below.

git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py TinyLlama/TinyLlama-1.1B-Chat-v1.0 open_platypus --precision float32 --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx

Run this KV cache injection script to speed up inference by caching the key and value attention states:

import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector

input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"

# Load the exported ONNX graph; external weight data can stay on disk
model = onnx.load(input_file, load_external_data=False)

# Inject KV cache inputs/outputs so past key/value states are reused at inference
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)

onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")

Follow the instructions on our One Shot With SparseML page for a step-by-step guide to performing one-shot quantization of large language models.

Slack

For further support, and discussions on these models and AI in general, join Neural Magic's Slack Community.
