NeuralReyna-Mini-1.8B-v0.3


Description

This model was created by taking aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tuning it with DPO on the argilla/OpenHermes2.5-dpo-binarized-alpha dataset.

This model has capabilities in coding, math, science, roleplay, and function calling.

This model was trained using OpenAI's ChatML prompt format.
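
For reference, here is a minimal usage sketch (not an official snippet from this repository). It assumes the model ships a ChatML chat template with its tokenizer and that the standard transformers generation workflow applies; the example prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/NeuralReyna-Mini-1.8B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# ChatML-style conversation; apply_chat_template renders it into the
# <|im_start|>role ... <|im_end|> blocks the model was trained on,
# assuming the tokenizer's chat template is ChatML.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```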

Quants

HQQ - https://huggingface.co/twoxfh/NeuralReyna-Mini-hqq-1.8B-v0.3
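A hedged loading sketch for the HQQ quant linked above, assuming the quantized repository follows the standard hqq save format and the hqq package's HQQModelForCausalLM.from_quantized entry point; if the quantized repo documents a different loading path, follow that instead.

```python
# Assumes the `hqq` package is installed and the linked repo was saved
# in hqq's standard quantized format.
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

quant_id = "twoxfh/NeuralReyna-Mini-hqq-1.8B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(quant_id)
model = HQQModelForCausalLM.from_quantized(quant_id)
```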

Evaluation

See the Open LLM Leaderboard results below.

Contributions

Thanks to @aloobun and @Locutusque for their contributions to this model.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                               Value
------------------------------------------
Avg.                                 41.77
AI2 Reasoning Challenge (25-shot)    35.58
HellaSwag (10-shot)                  61.13
MMLU (5-shot)                        44.22
TruthfulQA (0-shot)                  41.99
Winogrande (5-shot)                  60.93
GSM8K (5-shot)                        6.75
