---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback and suggestions or to get help.

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with bitsandbytes (see the sketch below).
- ***How does the model quality change?*** The quality of the model output will slightly degrade.
- ***What is the model format?*** We use the standard safetensors format.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

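As an illustration, bitsandbytes 4-bit quantization is typically configured through `BitsAndBytesConfig` in `transformers`. The following is a minimal sketch of how such a compressed model can be produced, not necessarily the exact recipe used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# A typical bitsandbytes 4-bit (NF4) quantization setup; the exact settings
# used to compress this model may differ.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize the base model on the fly while loading it.
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```
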
## Usage

There are several general ways to use the DBRX models:

* DBRX Base and DBRX Instruct are available for download on Hugging Face (see the Quickstart guide below). This repository hosts a compressed version of DBRX Instruct; the original DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
* The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
* DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
* For more information on how to fine-tune using LLM-Foundry, please take a look at the LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).

## Quickstart Guide

Getting started with DBRX models is easy with the `transformers` library. Note that the full-precision model requires ~264GB of RAM; this 4-bit compressed version is substantially smaller. You will need the following packages:

```bash
pip install "transformers>=4.39.2" "tiktoken>=0.6.0"
```

If you'd like to speed up download time, you can use the `hf_transfer` package as described by Hugging Face [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads):

```bash
pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```

You will need to request access to this repository to download the model. Once this is granted,
[obtain an access token](https://huggingface.co/docs/hub/en/security-tokens) with `read` permission, and supply the token below.

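Alternatively, you can authenticate once with `huggingface_hub` instead of passing the token to every call. A minimal sketch, where `hf_YOUR_TOKEN` is a placeholder for your own token:

```python
from huggingface_hub import login

# Stores the token locally so later from_pretrained calls can omit token=...
login(token="hf_YOUR_TOKEN")
```
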
### Run the model on multiple GPUs:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and the compressed model; replace "hf_YOUR_TOKEN" with
# your own Hugging Face access token.
tokenizer = AutoTokenizer.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", trust_remote_code=True, token="hf_YOUR_TOKEN")
model = AutoModelForCausalLM.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN")

# Format the prompt with the chat template and move the inputs to the GPU.
input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")

# Generate up to 200 new tokens and decode the full sequence.
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```

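After loading, you can sanity-check the memory savings from 4-bit quantization. A small optional check, assuming `model` from the snippet above:

```python
# get_memory_footprint() reports the model's parameter memory in bytes.
footprint_gb = model.get_memory_footprint() / 1e9
print(f"Model memory footprint: {footprint_gb:.1f} GB")
```
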
## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct), which provided the base model, before using this model. The license of `pruna-engine` is available [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).