# Tiny Vicuna 1B

Tiny Vicuna 1B (`Jiayi-Pan/Tiny-Vicuna-1B`) is a fine-tuned version of TinyLlama on the WizardVicuna dataset. It should be fully compatible with the Vicuna-v1.5 series, and its small size makes it easy to iterate on for early experiments.
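Since the card claims Vicuna-v1.5 compatibility, prompts can follow the standard Vicuna-v1.5 single-turn template. A minimal sketch is below; the helper name and the example question are illustrative, not part of the card.

```python
# Standard Vicuna-v1.5 system preamble; the card states the model
# should be compatible with the Vicuna-v1.5 series.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def vicuna_prompt(user_message: str) -> str:
    """Format a single-turn Vicuna-v1.5-style prompt (illustrative helper)."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = vicuna_prompt("What is the capital of France?")
print(prompt)

# The formatted prompt would then be tokenized and passed to the
# checkpoint, e.g. (not executed here):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Jiayi-Pan/Tiny-Vicuna-1B")
#   model = AutoModelForCausalLM.from_pretrained("Jiayi-Pan/Tiny-Vicuna-1B")
```

The generation itself is standard `transformers` usage; only the prompt format above is specific to Vicuna-style checkpoints.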
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 34.76 |
| AI2 Reasoning Challenge (25-Shot) | 33.45 |
| HellaSwag (10-Shot) | 55.92 |
| MMLU (5-Shot) | 25.45 |
| TruthfulQA (0-shot) | 33.82 |
| Winogrande (5-shot) | 58.41 |
| GSM8k (5-shot) | 1.52 |
## Evaluation results

| Benchmark | Split | Metric | Value |
|---|---|---|---|
| AI2 Reasoning Challenge (25-Shot) | test | normalized accuracy | 33.45 |
| HellaSwag (10-Shot) | validation | normalized accuracy | 55.92 |
| MMLU (5-Shot) | test | accuracy | 25.45 |
| TruthfulQA (0-shot) | validation | mc2 | 33.82 |
| Winogrande (5-shot) | validation | accuracy | 58.41 |
| GSM8k (5-shot) | test | accuracy | 1.52 |

All results above are from the Open LLM Leaderboard.