4bpw exl2 quant of: https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B

Lyra4-Gutenberg-12B

Sao10K/MN-12B-Lyra-v4 finetuned on jondurbin/gutenberg-dpo-v0.1.

Method

ORPO finetuned for 3 epochs using an RTX 3090 and an RTX 4060 Ti.

Fine-tune Llama 3 with ORPO
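For intuition, ORPO augments the standard SFT loss with an odds-ratio preference term rather than using a separate reward model. A minimal sketch of that term, following the ORPO formulation; `p_chosen` and `p_rejected` are illustrative per-sequence probabilities, not values from this finetune:

```python
import math

def odds(p):
    # Odds of the model assigning probability p to a response.
    return p / (1.0 - p)

def orpo_or_loss(p_chosen, p_rejected):
    # Odds-ratio term: -log sigmoid(log odds(p_chosen) - log odds(p_rejected)).
    # Small when the chosen response is much more likely than the rejected one.
    log_or = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))
```

When the model assigns equal probability to both responses the term is log 2; it shrinks as the gap in favor of the chosen response grows, which is the gradient signal the DPO-style pairs in gutenberg-dpo-v0.1 provide during training.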

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 19.63 |
| IFEval (0-shot)     | 22.12 |
| BBH (3-shot)        | 34.24 |
| MATH Lvl 5 (4-shot) | 11.71 |
| GPQA (0-shot)       |  9.17 |
| MuSR (0-shot)       | 11.97 |
| MMLU-PRO (5-shot)   | 28.57 |