NeuralReyna-Mini-1.8B-v0.3
Description
This model takes aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tunes it with DPO on the argilla/OpenHermes2.5-dpo-binarized-alpha dataset.
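The sketch below illustrates how a DPO run of this kind could be set up with TRL; it is not the original training script. It assumes a TRL 0.7-style DPOTrainer API, and the preprocessing that flattens the dataset's conversational chosen/rejected pairs into plain prompt/chosen/rejected text columns is a hypothetical placeholder, as are the hyperparameter values.

```python
# Hypothetical reproduction sketch (not the actual training script), assuming a
# TRL 0.7-style DPOTrainer API and illustrative hyperparameters.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "aloobun/Reyna-Mini-1.8B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("argilla/OpenHermes2.5-dpo-binarized-alpha", split="train")

def to_dpo_columns(example):
    # Placeholder: flatten the conversational chosen/rejected pairs into plain text.
    # The exact field layout of the dataset is an assumption here.
    return {
        "prompt": example["chosen"][0]["content"],
        "chosen": example["chosen"][-1]["content"],
        "rejected": example["rejected"][-1]["content"],
    }

dataset = dataset.map(to_dpo_columns, remove_columns=dataset.column_names)

trainer = DPOTrainer(
    model,
    ref_model=None,          # with None, TRL builds a frozen reference copy of the model
    args=TrainingArguments(
        output_dir="reyna-dpo",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
        num_train_epochs=1,
        logging_steps=10,
    ),
    beta=0.1,                # DPO temperature; the value actually used is not documented
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```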
This model has capabilities in coding, math, science, roleplay, and function calling.
This model was trained with OpenAI's ChatML prompt format.
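A minimal usage sketch is shown below. It assumes the tokenizer ships a ChatML chat template and that you have transformers (and a GPU for fp16) available; the generation parameters are illustrative, not recommended settings.

```python
# Minimal sketch: generating a reply using the ChatML chat template via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/NeuralReyna-Mini-1.8B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# apply_chat_template renders the ChatML-formatted prompt:
# <|im_start|>system ... <|im_end|> <|im_start|>user ... <|im_end|> <|im_start|>assistant
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```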
Quants
HQQ - https://huggingface.co/twoxfh/NeuralReyna-Mini-hqq-1.8B-v0.3
Evaluation
See the Open LLM Leaderboard results below.
Contributions
Thanks to @aloobun and @Locutusque for their contributions to this model.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 41.77 |
| AI2 Reasoning Challenge (25-Shot) | 35.58 |
| HellaSwag (10-Shot)               | 61.13 |
| MMLU (5-Shot)                     | 44.22 |
| TruthfulQA (0-shot)               | 41.99 |
| Winogrande (5-shot)               | 60.93 |
| GSM8k (5-shot)                    | 6.75  |