Update README.md
README.md CHANGED
@@ -34,6 +34,10 @@ In short, palmer is now half the size, twice the speed and almost the same overall performance.
 
 As with all palmer models, this model is biased to respond with answers without requiring any specific prompt; feel free to further fine-tune it for your specific use case.
 
+#### benchmarks
+
+These are zero-shot evaluations performed on current state-of-the-art language models.
+
 | Model | MMLU | ARC-C | HellaSwag | PIQA | Winogrande | Average |
 |--------------------------------|-------|-------|-----------|--------|------------|---------|
 | smollm-360m | 0.2537| 0.3626| 0.5350 | 0.7116 | 0.5659 | 0.4858 |
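
The zero-shot table above reads like output from EleutherAI's lm-evaluation-harness; here is a rough sketch of how such numbers could be reproduced. The harness choice, task names, and model id are assumptions, not stated in this commit.

```python
# Sketch: zero-shot evaluation with EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The checkpoint id below is a placeholder; the commit does not name one.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                     # Hugging Face causal-LM backend
    model_args="pretrained=your-org/palmer-model",  # hypothetical model id
    tasks=["mmlu", "arc_challenge", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,                                  # zero-shot, as in the table
    batch_size=8,
)

# Per-task metrics; exact metric keys vary with the harness version.
for task, metrics in results["results"].items():
    print(task, metrics)
```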
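
And, since the README stresses that palmer answers without any specific prompt format, a minimal generation sketch with Hugging Face transformers follows; again, the model id is a placeholder, not taken from this commit.

```python
# Sketch: prompt-free generation with transformers; no chat template or instruction prefix.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/palmer-model"  # hypothetical id, not stated in this commit
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The question is fed as plain text, matching the "no specific prompt" claim above.
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```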