Update README.md
README.md
CHANGED
@@ -121,11 +121,13 @@ library_name: transformers
 pipeline_tag: text-generation
 ---
 
-
+
 
 # NeuralDaredevil-7B-GGUF
-This is quantized version of [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) created using llama.cpp
-
+This is quantized version of [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) created using llama.cpp
+
+![](https://i.imgur.com/D80Ua7T.png)
+
 # Model Description
 
 NeuralDaredevil-7B is a DPO fine-tune of [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac).