Update README.md
added WizardLM_evol_instruct_V2_196k details and upstream links
README.md
CHANGED
```diff
@@ -1,10 +1,19 @@
 ---
 inference: false
+datasets:
+- WizardLM/WizardLM_evol_instruct_V2_196k
 ---
 # Model Card for Model ID
 
 
-This is exl2 5.53bpw quant of
+This is exl2 5.53bpw quant of Vicuna, specifically https://huggingface.co/lmsys/vicuna-13b-v1.5-16k
+
+More notes on the original model can be found here [lmSys page](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
+
+`python convert.py -i C:\webui\models\deepseek-ai_deepseek-coder-6.7b-instruct
+-o C:\webui\models\Processed -nr -m deepseek-ai_deepseek-coder-6.7b-instruct_measurement.json
+-b 2.4 -gr 50 -c "C:\webui\repositories\exllamav2\WizardLM_evol_instruct_V2_196k_0000.parquet"
+-cf deepseek-ai_deepseek-coder-6.7b-instruct-exl2-2.4bpw -ss 4000`
 
 ## Original Model Details
 
```
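The convert.py invocation added in the diff packs many flags onto a few lines. As a sketch only, the same command can be built flag by flag with annotations; the flag meanings below are taken from exllamav2's convert.py as I understand it (the `-gr` reading in particular is an assumption), and the paths are the example paths from the README, not paths guaranteed to exist on your machine:

```shell
# Hedged, annotated sketch of the convert.py call from the README diff.
# Built as a string so each flag can carry a comment; run with: eval "$CMD"
CMD="python convert.py"
CMD="$CMD -i C:\webui\models\deepseek-ai_deepseek-coder-6.7b-instruct"    # input model directory (HF format)
CMD="$CMD -o C:\webui\models\Processed"                                   # working directory for the job
CMD="$CMD -nr"                                                            # no resume: start a fresh job
CMD="$CMD -m deepseek-ai_deepseek-coder-6.7b-instruct_measurement.json"   # reuse a saved measurement pass
CMD="$CMD -b 2.4"                                                         # target bits per weight
CMD="$CMD -gr 50"                                                         # assumed: limit rows processed on GPU at once
CMD="$CMD -c C:\webui\repositories\exllamav2\WizardLM_evol_instruct_V2_196k_0000.parquet"  # calibration dataset (parquet)
CMD="$CMD -cf deepseek-ai_deepseek-coder-6.7b-instruct-exl2-2.4bpw"       # output folder for the compiled model
CMD="$CMD -ss 4000"                                                       # max output shard size (MB)
echo "$CMD"
```

Note the command in the diff targets deepseek-coder at 2.4 bpw, while the README text above it describes a 5.53 bpw Vicuna quant; the command appears to be a worked example of the conversion process rather than the exact invocation used for this repo.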