DoesntKnowAI committed
Commit 3f485cc · verified · 1 Parent(s): 7746595

Update README.md

Files changed (1)
  1. README.md +33 -2
README.md CHANGED
@@ -9,15 +9,46 @@ tags:
  - llama-cpp
  - gguf-my-repo
  ---
+ # NitroOxziT-8B
+
+ NitroOxziT-8B is a merge of the following models, created with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+ * [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
+ * [cognitivecomputations/Dolphin3.0-Llama3.1-8B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.1-8B)
+
+ Don't ask about the weird name; I wanted a modern merge of Hermes and Dolphin, so I made one.
+
+ Unquantized: [DoesntKnowAI/NitroOxziT-8B](https://huggingface.co/DoesntKnowAI/NitroOxziT-8B)

  # DoesntKnowAI/NitroOxziT-8B-Q8_0-GGUF
  This model was converted to GGUF format from [`DoesntKnowAI/NitroOxziT-8B`](https://huggingface.co/DoesntKnowAI/NitroOxziT-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/DoesntKnowAI/NitroOxziT-8B) for more details on the model.
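For anyone who wants the quantized files locally, a minimal sketch using the Hugging Face CLI; this is not part of the original card, and the target directory name is arbitrary:

```bash
# Sketch: download the quantized repo with the Hugging Face CLI
# (pip install -U huggingface_hub). The local directory name is arbitrary.
huggingface-cli download DoesntKnowAI/NitroOxziT-8B-Q8_0-GGUF --local-dir ./NitroOxziT-8B-Q8_0-GGUF
```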
+ ## 🧩 Configuration
+
+ ```yaml
+ slices:
+   - sources:
+       - model: NousResearch/Hermes-3-Llama-3.1-8B
+         layer_range: [0, 32]
+         weight: 0.60
+       - model: cognitivecomputations/Dolphin3.0-Llama3.1-8B
+         layer_range: [0, 32]
+         weight: 0.40
+ merge_method: slerp
+ parameters:
+   t:
+     - model: NousResearch/Hermes-3-Llama-3.1-8B
+       value: 1.0
+     - model: cognitivecomputations/Dolphin3.0-Llama3.1-8B
+       value: 1.0
+ base_model: NousResearch/Hermes-3-Llama-3.1-8B
+ dtype: bfloat16
+ ```
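To reproduce a merge from a config like this one, mergekit's CLI is the usual route. A minimal sketch, not from the original card, assuming the YAML above is saved as `config.yaml` and the output directory name is your own choice:

```bash
# Minimal sketch: run the merge with mergekit (https://github.com/arcee-ai/mergekit).
# config.yaml holds the YAML above; ./NitroOxziT-8B is an arbitrary output directory.
pip install mergekit
mergekit-yaml config.yaml ./NitroOxziT-8B
```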
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux):
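A rough sketch of the usual next steps from the GGUF-my-repo workflow; the `.gguf` filename passed to `--hf-file` is an assumption based on that space's naming convention, so confirm it against the repo's file list before running:

```bash
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Chat with the quantized model; the --hf-file name below is assumed, not verified.
llama-cli --hf-repo DoesntKnowAI/NitroOxziT-8B-Q8_0-GGUF \
  --hf-file nitrooxzit-8b-q8_0.gguf \
  -p "Write a short haiku about merged language models."
```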