---
license: apache-2.0
language:
  - en
tags:
  - not-for-all-audiences
  - mergekit
  - merge
base_model:
  - tavtav/eros-7b-test
  - NousResearch/Nous-Hermes-2-Mistral-7B-DPO
  - maywell/Synatra-7B-v0.3-RP
  - NeverSleep/Noromaid-7B-0.4-DPO
  - cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5
library_name: transformers
---

# Silly RP 7B

This is my first real attempt at merging models, so it's mostly me experimenting and seeing what works. It seems solid in my tests so far, but no guarantees on quality – give it a shot and share your feedback! I'm eager to hear how others like it. Honestly, I'm still learning the ropes, like the benefits of the different merge methods :P

## Extra info

- Chat template: ChatML (see the example below)
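
For reference, a ChatML prompt wraps each turn in `<|im_start|>` / `<|im_end|>` markers. The system and user text below are just placeholders:

```
<|im_start|>system
You are a helpful roleplay assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```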



This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the task arithmetic merge method, with NeverSleep/Noromaid-7B-0.4-DPO as the base.
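
In rough terms, task arithmetic treats each fine-tune as a "task vector" – its weights minus the base's – and adds a weighted sum of those vectors back onto the base. Below is a minimal sketch of that idea, not mergekit's actual implementation; in particular, it glosses over how mergekit combines the per-model weights with the top-level `weight` parameter.

```python
import torch

def task_arithmetic_merge(base_sd, finetuned_sds, weights):
    """Merge per tensor: base + sum_i(w_i * (finetuned_i - base))."""
    merged = {}
    for name, base_w in base_sd.items():
        # Each fine-tune contributes its "task vector" (delta from base), scaled by w.
        delta = sum(w * (sd[name] - base_w) for sd, w in zip(finetuned_sds, weights))
        merged[name] = base_w + delta
    return merged

# Toy demo with a single 2x2 "layer": result is 0.2*1 + 0.2*2 = 0.6 everywhere.
base = {"layer.weight": torch.zeros(2, 2)}
ft_a = {"layer.weight": torch.ones(2, 2)}
ft_b = {"layer.weight": torch.full((2, 2), 2.0)}
print(task_arithmetic_merge(base, [ft_a, ft_b], [0.2, 0.2])["layer.weight"])
```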

### Models Merged

The following models were included in the merge:

- maywell/Synatra-7B-v0.3-RP
- tavtav/eros-7b-test
- cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: NeverSleep/Noromaid-7B-0.4-DPO
models:
  - model: maywell/Synatra-7B-v0.3-RP
    parameters:
      weight: 0.2
  - model: tavtav/eros-7b-test
    parameters:
      weight: 0.2
  - model: cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5
    parameters:
      weight: 0.2
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      weight: 0.2
merge_method: task_arithmetic
parameters:
  weight: 0.17
dtype: float16
random_seed: 694201337567099116663322537
```
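
To reproduce the merge, you could save the configuration above to a file (the name `config.yml` below is just an example) and run it through mergekit's `mergekit-yaml` entry point:

```sh
# ./output is an example output directory.
mergekit-yaml config.yml ./output
```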