Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ inference: False
 
 This is a sharded checkpoint (with ~4GB shards) of the `stabilityai/stablelm-base-alpha-7b` model **in `8bit` precision** using `bitsandbytes`.
 
-Refer to the [original model](https://huggingface.co/stabilityai/stablelm-
+Refer to the [original model](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) for all details w.r.t. the model. For more info on loading 8bit models, refer to the [example repo](https://huggingface.co/ybelkada/bloom-1b7-8bit) and/or the `4.28.0` [release info](https://github.com/huggingface/transformers/releases/tag/v4.28.0).
 
 - total model size is only ~7 GB!
 - this enables low-RAM loading, i.e. Colab :)
@@ -24,12 +24,13 @@ Refer to the [original model](https://huggingface.co/stabilityai/stablelm-base-a
 
 ## Basic Usage
 
-You can use this model as a drop-in replacement in the notebook for the standard sharded models:
-
 <a href="https://colab.research.google.com/gist/pszemraj/4bd75aa3744f2a02a5c0ee499932b7eb/sharded-stablelm-testing-notebook.ipynb">
 <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
 </a>
 
+You can use this model as a drop-in replacement in the notebook for the standard sharded models.
+
+
 ### Python
 
 Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work **you must have** `transformers>=4.28.0` and `bitsandbytes>0.37.2`.
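As a concrete reference for the usage described in the updated README text, here is a minimal Python loading sketch. It assumes the requirements stated above (`transformers>=4.28.0`, `accelerate`, `bitsandbytes>0.37.2`); the repo id below is a placeholder, since this diff does not name the Hub id of the sharded 8-bit checkpoint.

```python
# Minimal loading sketch (assumes transformers>=4.28.0, accelerate, and bitsandbytes>0.37.2,
# e.g. installed via: pip install -U transformers accelerate bitsandbytes).
# NOTE: the repo id below is a placeholder; substitute this repo's actual Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/stablelm-base-alpha-7b-8bit-sharded"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The shards are already serialized in 8-bit, so the quantization config stored with the
# checkpoint should be picked up automatically; device_map="auto" lets accelerate place them.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "StableLM is a language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```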
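For background on how a "sharded checkpoint in 8bit precision" like this can be produced, here is a rough sketch (an illustration under stated assumptions, not necessarily the exact steps used for this repo): load the original fp16 model in 8-bit with `bitsandbytes`, then re-save it in ~4GB shards using the 8-bit serialization added in `transformers` `4.28.0`.

```python
# Sketch: creating a sharded 8-bit checkpoint from the original model.
# The output directory name is arbitrary; requires a GPU with bitsandbytes installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

src = "stabilityai/stablelm-base-alpha-7b"

model = AutoModelForCausalLM.from_pretrained(src, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(src)

# transformers>=4.28.0 can serialize bitsandbytes 8-bit weights; max_shard_size sets shard size
out_dir = "stablelm-base-alpha-7b-8bit-sharded"
model.save_pretrained(out_dir, max_shard_size="4GB")
tokenizer.save_pretrained(out_dir)
```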