---
license: mit
datasets:
- facebook/empathetic_dialogues
language:
- en
base_model: alignment-handbook/zephyr-7b-sft-full
widget:
- example_title: Empathetic dialogue
  messages:
  - role: system
    content: >-
      You are a friendly assistant, who provides empathetic responses to the
      user. The input contains previous turn of the dialog, where each
      utterance is prefaced with tags <|user|>, or <|assistant|>. Be
      empathetic and precise. Make sure to give responses that make the
      dialogue flow. Avoid repeating the prompt. Please respond creatively
      and expressively to make the responses longer. You can offer advice.
  - role: user
    content: >-
      Yeah about 10 years ago I had a horrifying experience. It was 100%
      their fault but they hit the water barrels and survived. They had no
      injuries but they almost ran me off the road.
  - role: assistant
    content: Did you suffer any injuries?
  - role: user
    content: >-
      No I wasn't hit. It turned out they were drunk. I felt guilty but
      realized it was his fault.
  output:
    text: >-
      That's good that you didn't get hurt. I hope they got in trouble for
      driving drunk.
pipeline_tag: text-generation
model-index:
- name: justtherightsize/zephyr-7b-sft-full124_d270
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Open LLM Leaderboard
      type: various
      config: various
      split: various
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 0.2665
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 58.38
    source:
      name: MMLU
      url: https://github.com/huggingface/lm-evaluation-harness.git
---
# Model Card for zephyr-7b-sft-full124_d270
This model participates in multi-turn dialogues and responds empathetically.
## Model Description
We propose a data-driven solution for Empathetic Response Generation with LLMs: aligning LLMs via preference optimization algorithms. First, we build a preference dataset from the benchmark dataset EmpatheticDialogues (Rashkin et al., 2019), which contains short multi-turn human-to-human dialogues grounded by emotion labels. We leverage this emotion grounding to sample dialogue completions labeled with polar-opposite emotions on Plutchik's wheel (Plutchik, 2001), so that each prompt is paired with a preferred and a non-preferred completion. We then fine-tune a foundation LLM with Direct Preference Optimization (DPO) (Rafailov et al., 2024) to generate responses aligned with the preferred candidates.
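As a purely illustrative sketch of the pairing step (the helper name, the coarse eight-emotion mapping, and the data layout are assumptions, not the paper's exact procedure), a preference pair can be formed by taking the completion grounded in the dialogue's emotion as preferred and the completion grounded in its Plutchik opposite as non-preferred:

```python
# Hypothetical illustration of pairing completions by polar-opposite emotions.
# Plutchik's eight primary-emotion opposites are listed below; EmpatheticDialogues
# uses 32 fine-grained labels, so the actual mapping in the paper may differ.
PLUTCHIK_OPPOSITES = {
    "joy": "sadness", "sadness": "joy",
    "trust": "disgust", "disgust": "trust",
    "fear": "anger", "anger": "fear",
    "anticipation": "surprise", "surprise": "anticipation",
}

def build_preference_pair(prompt: str, completions: dict[str, str], emotion: str) -> dict:
    """Pair the completion matching the dialogue's emotion (preferred) with the
    completion labeled with its polar opposite (non-preferred)."""
    return {
        "prompt": prompt,
        "chosen": completions[emotion],
        "rejected": completions[PLUTCHIK_OPPOSITES[emotion]],
    }
```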
- Developed by: TBA
- Model type: Autoregressive decoder-only transformer
- Language(s): en
- Finetuned from: alignment-handbook/zephyr-7b-sft-full
## Sources
- Repository: https://github.com/justtherightsize/empo
- Paper (non-anonymized preprint): https://arxiv.org/abs/2406.19071
## Usage
TODO
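Until this section is filled in, here is a minimal inference sketch. It assumes the tokenizer ships the standard Zephyr chat template and reuses the system prompt and user turn from the widget above; the generation settings are placeholders, not recommended values.

```python
# Minimal sketch: chat-style generation with the transformers pipeline (assumed usage).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="justtherightsize/zephyr-7b-sft-full124_d270",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a friendly assistant, who provides empathetic responses to the user. "
            "The input contains previous turn of the dialog, where each utterance is prefaced "
            "with tags <|user|>, or <|assistant|>. Be empathetic and precise. Make sure to give "
            "responses that make the dialogue flow. Avoid repeating the prompt. Please respond "
            "creatively and expressively to make the responses longer. You can offer advice."
        ),
    },
    {
        "role": "user",
        "content": (
            "Yeah about 10 years ago I had a horrifying experience. It was 100% their fault "
            "but they hit the water barrels and survived. They had no injuries but they "
            "almost ran me off the road."
        ),
    },
]

# Build the prompt with the tokenizer's chat template and generate a reply.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"][len(prompt):])
```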
## Out-of-Scope Usage
Note that fine-tuning on the EmpatheticDialogues dataset caused some specialization, so the model may underperform on tasks outside empathetic dialogue generation.
## Training
TODO
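Training details are still to be added. Below is a minimal sketch of the DPO fine-tuning described in the Model Description, assuming the trl library; the hyperparameters, output path, and toy preference pair are placeholders, not the values or data used for this model.

```python
# Minimal sketch of DPO fine-tuning on preference pairs (illustrative; not the exact recipe).
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Each row pairs a dialogue prompt with a preferred ("chosen") and a
# non-preferred ("rejected") completion, as described in the Model Description.
train_dataset = Dataset.from_list([
    {
        "prompt": "<|user|> I finally passed my driving test!\n<|assistant|>",
        "chosen": "Congratulations, that is wonderful news! How are you going to celebrate?",
        "rejected": "Okay.",
    },
    # ... preference pairs built from EmpatheticDialogues ...
])

args = DPOConfig(
    output_dir="zephyr-7b-dpo-empathy",   # hypothetical output path
    beta=0.1,                             # placeholder DPO temperature
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # DPOTrainer creates a frozen reference copy
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,     # recent trl releases rename this to processing_class=
)
trainer.train()
```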
## Cite
TBA; for now, please cite the non-anonymized preprint: https://arxiv.org/abs/2406.19071