
Mistral-Nemo-Gutenberg-Doppel-12B

mistralai/Mistral-Nemo-Instruct-2407 fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.

Method

ORPO-tuned on a single RTX 3090 for 3 epochs, following the method described in Fine-tune Llama 3 with ORPO.
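
For reference, a run like this can be set up with TRL's ORPOTrainer. This is a minimal sketch, not the exact training script: the batch size and accumulation steps are assumptions, and fitting a 12B model on a single 24 GB card in practice also needs memory savings (e.g. LoRA or quantization) that are omitted here for brevity.

```python
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Nemo-Instruct-2407"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Both datasets are DPO-style (prompt/chosen/rejected), which is the
# schema ORPOTrainer expects; keep only those columns before merging.
cols = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

args = ORPOConfig(
    output_dir="Mistral-Nemo-Gutenberg-Doppel-12B",
    num_train_epochs=3,             # per the model card
    per_device_train_batch_size=1,  # assumption: small batch for a 24 GB card
    gradient_accumulation_steps=8,  # assumption
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    processing_class=tokenizer,  # `tokenizer=` on older TRL releases
)
trainer.train()
```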

Model size: 12.2B params (Safetensors)
Tensor type: BF16

This model is not available via any of the supported third-party Inference Providers and is not deployed on the HF Inference API.
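
It can be run locally with transformers instead. A minimal sketch (the prompt text is illustrative, and sampling settings are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type above
    device_map="auto",
)

messages = [{"role": "user", "content": "Write the opening paragraph of a gothic novel."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```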


Datasets used to train nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B

jondurbin/gutenberg-dpo-v0.1
nbeerbower/gutenberg2-dpo