llama3-8b-spaetzle-v20 is an int4-inc (intel auto-round) quantized merge of the following models:

* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [nbeerbower/llama-3-wissenschaft-8B-v2](https://huggingface.co/nbeerbower/llama-3-wissenschaft-8B-v2)

## Benchmarks

The GGUF q4_k_m version achieves 65.7 on EQ-Bench v2_de (171/171 parseable). From [Intel's low bit open llm leaderboard](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard):
| Type | Model | Average ⬆️ | ARC-c | ARC-e | Boolq | HellaSwag | Lambada | MMLU | Openbookqa | Piqa | Truthfulqa | Winogrande | #Params (B) | #Size (G) |
|------|-------------------------------------------|------------|-------|-------|-------|-----------|---------|-------|------------|-------|------------|------------|-------------|-----------|
| 🍒 | **cstr/llama3-8b-spaetzle-v20-int4-inc** | **66.43** | **61.77** | **85.4** | **82.75** | **62.79** | **71.73** | **64.17** | **37.4** | **80.41** | **43.21** | **74.66** | **7.04** | **5.74** |

## 🧩 Configuration