Commit 99e25a9 · committed by pszemraj · Parent(s): 5d92ccb
Files changed (1): README.md (+5 −4)
README.md CHANGED
@@ -9,12 +9,13 @@ tags:
  - sharded
  - 8-bit
  - quantized
- inference: False
+ - tuned
+ inference: false
  ---
 
  # stablelm-tuned-alpha-7b-sharded-8bit
 
- This is a sharded checkpoint (with ~4GB shards) of the `stabilityai/stablelm-base-alpha-7b` model **in `8bit` precision** using `bitsandbytes`.
+ This is a sharded checkpoint (with ~4GB shards) of the `stabilityai/stablelm-tuned-alpha-7b` model **in `8bit` precision** using `bitsandbytes`.
 
  Refer to the [original model](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) for all details w.r.t. to the model. For more info on loading 8bit models, refer to the [example repo](https://huggingface.co/ybelkada/bloom-1b7-8bit) and/or the `4.28.0` [release info](https://github.com/huggingface/transformers/releases/tag/v4.28.0).
 
@@ -44,8 +45,8 @@ Load the model. As it is serialized in 8bit you don't need to do anything special
  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
 
- model_name = "ethzanalytics/stablelm-base-alpha-7b-sharded-8bit"
+ model_name = "ethzanalytics/stablelm-tuned-alpha-7b-sharded-8bit"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
 
  model = AutoModelForCausalLM.from_pretrained(model_name)
  ```
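For quick reference, the sketch below shows one way the checkpoint named in this commit could be loaded and queried. It is a minimal illustration, not part of the committed README: `device_map="auto"` assumes `accelerate` is installed, and the `<|USER|>`/`<|ASSISTANT|>` prompt markers and generation settings are assumptions for demonstration, so check the upstream model card for the canonical prompt format.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/stablelm-tuned-alpha-7b-sharded-8bit"

# Load the tokenizer and the pre-quantized 8-bit sharded checkpoint.
# device_map="auto" (requires the `accelerate` package) places the shards
# on available GPUs/CPU automatically.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Assumed prompt format for the tuned model; adjust per the upstream model card.
prompt = "<|USER|>What is a sharded checkpoint?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Illustrative generation settings.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```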