
🧪 Just Another Model Experiment

This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

Mistral-Nemo-Moderne-12B-FFT-experimental

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on gutenberg2-dpo and gutenberg-moderne-dpo.

Warning: this model exhibits erratic behavior and poor performance.

Method

ORPO-tuned on 8x A100s for 1.5 epochs.

This was a full finetune. I think the issues with the model can be chalked up to conflicts between the Mistral Instruct and ChatML prompt formats.

Model details

Model size: 12.2B params
Tensor type: BF16
Weights format: Safetensors

Model tree for nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental

Finetunes of the base model: 14 (including this one)
Merges of this model: 1
Quantizations of this model: 2

Datasets used to train nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental

gutenberg2-dpo
gutenberg-moderne-dpo