Update README.md
README.md
@@ -21,7 +21,7 @@ Falcon-7b-chat-oasst1 is a chatbot-like model for dialogue generation. It was bu
 
 ## Model Details
 
-The model was fine-tuned in
+The model was fine-tuned in 8-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 3 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU with 37 GB of available memory. See attached [Colab Notebook](https://huggingface.co/dfurman/falcon-7b-chat-oasst1/blob/main/finetune_falcon7b_oasst1_with_bnb_peft.ipynb) for the code and hyperparams used to train the model.
 
 ### Model Date
 
@@ -102,19 +102,12 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 peft_model_id = "dfurman/falcon-7b-chat-oasst1"
 config = PeftConfig.from_pretrained(peft_model_id)
 
-bnb_config = BitsAndBytesConfig(
-    load_in_4bit=True,
-    bnb_4bit_use_double_quant=True,
-    bnb_4bit_quant_type="nf4",
-    bnb_4bit_compute_dtype=torch.bfloat16
-)
-
 model = AutoModelForCausalLM.from_pretrained(
     config.base_model_name_or_path,
     return_dict=True,
-    quantization_config=bnb_config,
     device_map={"":0},
     trust_remote_code=True,
+    load_in_8bit=True,
 )
 
 tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
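
The paragraph added under "## Model Details" describes QLoRA-style fine-tuning with 🤗 `peft` and `bitsandbytes`. Below is a minimal sketch of that setup for context; the base model name, `LoraConfig` hyperparameters, and target modules are illustrative assumptions, not the exact values from the linked notebook.

```python
# Sketch: 8-bit base model + LoRA adapters via peft (illustrative hyperparams).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "tiiuae/falcon-7b"  # assumed base model

# Load the frozen base model in 8-bit; only the adapter weights will train.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    device_map={"": 0},
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapter on Falcon's fused attention projection (assumed r/alpha).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of total weights
```

Because the quantized base stays frozen, only the low-rank matrices are updated, which is what keeps the run within a single GPU's memory.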
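
For readers applying this change, here is a sketch of the full post-change usage flow, with the adapter attached via `PeftModel`; the prompt format shown is a hypothetical example, not necessarily the template the model was trained on.

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "dfurman/falcon-7b-chat-oasst1"
config = PeftConfig.from_pretrained(peft_model_id)

# Base model in 8-bit, as in the updated snippet above.
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    device_map={"": 0},
    trust_remote_code=True,
    load_in_8bit=True,
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the trained LoRA adapter on top of the quantized base.
model = PeftModel.from_pretrained(model, peft_model_id)

# Hypothetical prompt, only to show generation end to end.
inputs = tokenizer("<human>: Hello, how are you?\n<bot>:", return_tensors="pt").to("cuda")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping the 4-bit `BitsAndBytesConfig` for plain `load_in_8bit=True` trades some memory savings for a simpler call and, generally, closer fidelity to the full-precision weights.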