EVA Qwen2.5 7B 0.1

An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-7B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanded to improve the versatility, creativity, and "flavor" of the resulting model.

Version 0.1 notes:
The dataset was deduplicated and cleaned relative to version 0.0, and the learning rate was adjusted. The resulting model appears to be more stable, and version 0.0's problems with handling short inputs and min_p sampling seem to be mostly gone.
The model will be retrained once more, because this run crashed around epoch 1.2 (out of 3) (thanks, DeepSpeed, really appreciate it), and as a result it is still somewhat undertrained.

Prompt format is ChatML.
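As a minimal sketch (not part of the original card), a ChatML prompt wraps each message in `<|im_start|>role` … `<|im_end|>` tags; the helper name `chatml_prompt` below is hypothetical:

```python
def chatml_prompt(system, turns):
    """Assemble a ChatML-formatted prompt string.

    Each message is wrapped as <|im_start|>role\\n...<|im_end|>,
    and the prompt ends with an open assistant turn for the model
    to complete.
    """
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, text in turns:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(chatml_prompt("You are a storyteller.", [("user", "Begin the tale.")]))
```

Frontends like SillyTavern apply this template automatically when ChatML is selected, so manual assembly is only needed for raw API or scripted use.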


Recommended sampler values:

  • Temperature: 0.87
  • Top-P: 0.81
  • Repetition Penalty: 1.03

The model appears to prefer lower temperatures (0.9 and below). Min-P also seems to work now.
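To illustrate how the recommended values interact, here is a toy, self-contained sketch of the usual sampler chain (repetition penalty, then temperature scaling, then top-p nucleus truncation); it is an assumption about the pipeline, not this model's actual inference code, and `sample_logits` is a hypothetical helper:

```python
import numpy as np

def sample_logits(logits, prev_tokens,
                  temperature=0.87, top_p=0.81, rep_penalty=1.03):
    """Toy sampler chain using the card's recommended values."""
    logits = np.asarray(logits, dtype=np.float64).copy()
    # Repetition penalty: dampen logits of already-generated tokens.
    for t in set(prev_tokens):
        logits[t] = logits[t] / rep_penalty if logits[t] > 0 else logits[t] * rep_penalty
    # Temperature scaling: <1.0 sharpens the distribution.
    logits /= temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Top-p: keep the smallest set of tokens whose cumulative mass reaches top_p.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    kept = np.zeros_like(probs)
    kept[order[:cutoff]] = probs[order[:cutoff]]
    kept /= kept.sum()
    return int(np.random.choice(len(probs), p=kept))
```

In practice these three values are simply set in the frontend or passed as generation parameters; the sketch just shows the order in which they are conventionally applied.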

Recommended SillyTavern presets (via CalamitousFelicitousness):


Training data:

  • The Celeste 70B 0.1 data mixture, minus the Opus Instruct subset. See that model's card for details.
  • Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.
  • A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.
  • A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe.
  • A cleaned subset (~3k rows) of shortstories_synthlabels by Auri.
  • Synthstruct and SynthRP datasets by Epiculous.

Training time and hardware:

  • 2 days on 4x3090Ti (locally)

The model was trained by Kearm and Auri.

Special thanks:

  • to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data
  • to Alpindale for helping with FFT config for Qwen2.5
  • and to InfermaticAI's community for their continued support for our endeavors