End of training
Files changed:
- README.md +6 -1
- adapter_model.bin +3 -0
README.md CHANGED

@@ -2,6 +2,7 @@
 license: apache-2.0
 library_name: peft
 tags:
+- axolotl
 - generated_from_trainer
 base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
 model-index:
@@ -107,7 +108,7 @@ fsdp_config:

 # mixtral-fc-w-resp-new-format-8e

-This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on
+This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.

 ## Model description

@@ -153,6 +154,10 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 10
 - num_epochs: 8

+### Training results
+
+
+
 ### Framework versions

 - PEFT 0.7.0
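Since the README records a PEFT fine-tune of mistralai/Mixtral-8x7B-Instruct-v0.1 (library_name: peft, PEFT 0.7.0), a minimal sketch of how the resulting adapter could be loaded is given below. The repo id "your-namespace/mixtral-fc-w-resp-new-format-8e" is a placeholder assumption, not something this commit confirms; substitute the actual Hub path of this repository.

# Minimal sketch: load the PEFT adapter on top of the Mixtral base model.
# "your-namespace/mixtral-fc-w-resp-new-format-8e" is a placeholder repo id (assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
adapter_id = "your-namespace/mixtral-fc-w-resp-new-format-8e"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # full-precision Mixtral weights are large; quantize if memory is tight
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the weights stored in adapter_model.bin
model.eval()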
adapter_model.bin ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50021cd81833b84cb4916a8ba2d27ea5ef1151187c170d4c5308d3f2bef7fa83
+size 109144269
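The file committed here is a Git LFS pointer, not the binary itself: the three fields give the pointer spec version, the sha256 of the real file (oid), and its byte size. As a hedged illustration of what those fields mean, the sketch below downloads adapter_model.bin with huggingface_hub and checks it against the pointer; the repo id is again a placeholder assumption.

# Sketch: verify a downloaded adapter_model.bin against the Git LFS pointer fields above.
# The repo id is a placeholder assumption.
import hashlib
import os
from huggingface_hub import hf_hub_download

path = hf_hub_download("your-namespace/mixtral-fc-w-resp-new-format-8e", "adapter_model.bin")

# "size" field from the pointer
assert os.path.getsize(path) == 109144269

# "oid sha256" field from the pointer
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == "50021cd81833b84cb4916a8ba2d27ea5ef1151187c170d4c5308d3f2bef7fa83"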