Update README.md
* [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
* [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mmproj-model-f16.gguf?download=true)

Select the gguf file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
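If you prefer launching from a terminal instead of the GUI submenu, recent Koboldcpp releases accept the same two files as command-line arguments. A minimal sketch, assuming the `--model` and `--mmproj` flags of current Koboldcpp builds; the model filename here is a hypothetical placeholder, while `mmproj-model-f16.gguf` is the unquantized mmproj linked above:

```shell
# Launch Koboldcpp with your chosen Excalibur gguf plus the mmproj file.
# Adjust both paths to wherever you downloaded the files; the model
# filename below is only an example.
python koboldcpp.py \
  --model ./Excalibur-7b-DPO.gguf \
  --mmproj ./mmproj-model-f16.gguf
```

Either route (GUI field or `--mmproj` flag) ends up loading the same LLaVA projector alongside the main model.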
<img src="https://i.imgur.com/x8vqH29.png" width="425"/>

## Prompt Format