Mistral-v0.2-orpo


Mistral-v0.2-orpo is a fine-tuned version of the new Mistral-7B-v0.2 on the argilla/distilabel-capybara-dpo-7k-binarized preference dataset using Odds Ratio Preference Optimization (ORPO). The model was trained for 1 epoch, which took almost 8 hours on an A100 GPU.

πŸ’₯ LazyORPO

This model has been trained using LazyORPO, a Colab notebook that makes the training process much easier. It is based on the ORPO paper.
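
For readers who prefer a plain script over the notebook, a comparable run can be put together with TRL's ORPOTrainer. The sketch below is illustrative only: the base-model repo id, the hyperparameters (learning rate, batch size, beta), and the dataset preprocessing are assumptions, not the exact LazyORPO configuration used to train this model.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Illustrative base model; the repo id is an assumption, not taken from this card.
base = "mistral-community/Mistral-7B-v0.2"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference dataset used for this model. Depending on your TRL version you may
# need to map it into plain-text "prompt" / "chosen" / "rejected" columns first.
dataset = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

config = ORPOConfig(
    output_dir="mistral-v0.2-orpo",
    num_train_epochs=1,              # the card reports a single epoch
    per_device_train_batch_size=2,   # illustrative value
    learning_rate=5e-6,              # illustrative value
    beta=0.1,                        # weight of the odds-ratio term (lambda)
    max_length=1024,
)

# Newer TRL versions take processing_class=tokenizer instead of tokenizer=.
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()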


🎭 What is ORPO?

Odds Ratio Preference Optimization (ORPO) proposes a new way to train LLMs by combining SFT and alignment into a single objective (loss function), achieving state-of-the-art results. Some highlights of this technique are listed below, followed by a short sketch of the loss:

  • 🧠 Reference model-free β†’ memory friendly
  • πŸ”„ Replaces SFT+DPO/PPO with 1 single method (ORPO)
  • πŸ† ORPO Outperforms SFT, SFT+DPO on PHI-2, Llama 2, and Mistral
  • πŸ“Š Mistral ORPO achieves 12.20% on AlpacaEval 2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face's Zephyr Beta
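
To make the "single objective" point concrete, here is a minimal sketch of the ORPO loss as described in the paper: the usual SFT cross-entropy on the chosen response plus a weighted odds-ratio term that pushes the odds of the chosen response above those of the rejected one. The helper name and the default lambda are illustrative, not part of this model card.

import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, sft_nll, lam=0.1):
    """Sketch of the ORPO objective (hypothetical helper, not the official code).

    chosen_logps / rejected_logps: length-averaged log-probabilities of the
    chosen and rejected responses; sft_nll: cross-entropy (SFT) loss on the
    chosen response; lam: weight of the odds-ratio term.
    """
    # log odds(y|x) = log P - log(1 - P), with P = exp(length-averaged log-prob)
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    # odds-ratio term: favor the chosen response's odds over the rejected one's
    ratio_loss = -F.logsigmoid(log_odds_chosen - log_odds_rejected).mean()
    # single combined objective: SFT term + weighted odds-ratio term
    return sft_nll + lam * ratio_loss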

πŸ’» Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run everything on the GPU by default
torch.set_default_device("cuda")

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("abideen/Mistral-v0.2-orpo", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("abideen/Mistral-v0.2-orpo", trust_remote_code=True)

# Tokenize a prompt
inputs = tokenizer(
    "Write a detailed analogy between mathematics and a lighthouse.",
    return_tensors="pt",
    return_attention_mask=False,
)

# Generate up to 200 tokens and decode the output
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)

πŸ† Evaluation

COMING SOON

Model size: 7.24B params · Tensor type: BF16 · Format: Safetensors
