Update README.md
README.md CHANGED

````diff
@@ -21,8 +21,17 @@ Refer to the [original model](https://huggingface.co/stabilityai/stablelm-base-a
 - total model size is only ~7 GB! (Assuming model size reduction similar to the dolly-v2-12b model)
 - this enables low-RAM loading, i.e. Colab :)
 
+
 ## Basic Usage
 
+You can use this model as a drop-in replacement in the notebook for the standard sharded models:
+
+<a href="https://colab.research.google.com/gist/pszemraj/4bd75aa3744f2a02a5c0ee499932b7eb/sharded-stablelm-testing-notebook.ipynb">
+<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+</a>
+
+### Python
+
 Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work **you must have** `transformers>=4.28.0` and `bitsandbytes>0.37.2`.
 
 ```bash
@@ -39,8 +48,3 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 
 model = AutoModelForCausalLM.from_pretrained(model_name)
 ```
-You can also use this model as a drop-in replacement in the notebook for the standard sharded models:
-
-<a href="https://colab.research.google.com/gist/pszemraj/4bd75aa3744f2a02a5c0ee499932b7eb/sharded-stablelm-testing-notebook.ipynb">
-<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
-</a>
````
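The README text kept by this diff pins two different kinds of version floor: `transformers>=4.28.0` is inclusive, while `bitsandbytes>0.37.2` is strict (0.37.2 itself is not enough). A minimal stdlib-only sketch of that distinction, using a hypothetical helper that is not part of the README and handles only plain dotted numeric versions:

```python
# Hypothetical helper (not from the README): check the stated version floors,
# transformers>=4.28.0 (inclusive) and bitsandbytes>0.37.2 (strictly greater),
# by comparing dotted numeric versions as integer tuples.
def _as_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def meets_requirements(transformers_version: str, bitsandbytes_version: str) -> bool:
    return (_as_tuple(transformers_version) >= _as_tuple("4.28.0")
            and _as_tuple(bitsandbytes_version) > _as_tuple("0.37.2"))

print(meets_requirements("4.28.0", "0.38.0"))  # True: both floors satisfied
print(meets_requirements("4.28.0", "0.37.2"))  # False: bitsandbytes must be strictly newer
```

Note this sketch would reject pre-release suffixes (e.g. `0.38.0.dev0`); real dependency checks are better left to the package installer.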