IMPORTANT!

I strongly recommend using the DPO model instead: it has been further fine-tuned for better results and is the preferred choice for general use.

Please use the SFT model only if you specifically need a base model to build upon; it is a strong starting point for further fine-tuning, but for everything else the DPO model is the better option.
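For reference, here is a minimal loading sketch using Hugging Face transformers. It assumes a CUDA GPU with enough memory for a 24B-parameter model in bf16 and uses the repo ID named on this card; substitute the DPO repo ID if that is the variant you want.

```python
# Minimal loading sketch (assumes a GPU with enough memory for 24B in bf16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WasamiKirua/Mistral-Small-24B-new-params-16bit"  # swap in the DPO repo ID for general use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 tensors
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```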

Uploaded model

  • Developed by: WasamiKirua
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
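An Unsloth + TRL SFT run on this base checkpoint is typically set up along the lines below. This is only a sketch: the dataset, LoRA settings, and hyperparameters are illustrative placeholders, not the author's actual training configuration, and depending on your TRL version some arguments may need to move onto an SFTConfig instead.

```python
# Illustrative Unsloth + TRL SFT setup; dataset and hyperparameters are
# placeholders, not the author's actual training configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # the base checkpoint is a bnb 4-bit quant
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("json", data_files="train.jsonl", split="train"),
    dataset_text_field="text",  # each row holds a pre-formatted chat string
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```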

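To run a GGUF export of this model locally with Ollama, use the Modelfile below; replace {FILE_LOCATION} with the path to your local GGUF file.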
```
FROM {FILE_LOCATION}
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>
"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER temperature 1.5
PARAMETER min_p 0.1
```
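Once saved, `ollama create <model-name> -f Modelfile` registers the model and `ollama run <model-name>` starts an interactive session. The template uses ChatML-style `<|im_start|>`/`<|im_end|>` markers, and the two stop parameters keep generation from running past a turn boundary.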

Safetensors · Model size: 23.6B params · Tensor type: BF16

Model tree for WasamiKirua/Mistral-Small-24B-new-params-16bit

Finetunes: 1 model