---
base_model:
  - allura-org/MS-Meadowlark-22B
  - byroneverson/Mistral-Small-Instruct-2409-abliterated
  - crestf411/MS-sunfall-v0.7.0
  - unsloth/Mistral-Small-Instruct-2409
  - rAIfle/Acolyte-LORA
  - anthracite-org/magnum-v4-22b
  - Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
  - spow12/ChatWaifu_v2.0_22B
  - ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
  - Kaoeiri/Moingooistrial-22B-V1-Lora
  - InferenceIllusionist/SorcererLM-22B
  - TheDrummer/Cydonia-22B-v1.3
  - TheDrummer/Cydonia-22B-v1.1
  - TheDrummer/Cydonia-22B-v1.2
  - Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
library_name: transformers
tags:
  - mergekit
  - merge

---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with unsloth/Mistral-Small-Instruct-2409 as the base model.
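
As a rough intuition for what the per-model `density` values in the configuration below control, here is a minimal NumPy sketch of DARE's drop-and-rescale step on a single task vector. This is an illustration of the published method under simplified assumptions, not mergekit's actual implementation; all names in it are invented for the example.

```python
# Illustrative sketch of DARE's drop-and-rescale step (arXiv:2311.03099),
# not mergekit's implementation. A task vector (fine-tuned weights minus
# base weights) is randomly sparsified, then the surviving deltas are
# rescaled so the update keeps its expected magnitude.
import numpy as np

def dare_sparsify(task_vector: np.ndarray, density: float, rng=None) -> np.ndarray:
    """Keep each delta with probability `density`; rescale survivors by 1/density."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(task_vector.shape) < density
    return task_vector * mask / density

base = np.zeros(8)
finetuned = np.array([0.4, -0.2, 0.1, 0.0, 0.3, -0.5, 0.2, 0.1])
delta = finetuned - base            # the task vector for one merged model

sparse_delta = dare_sparsify(delta, density=0.88)  # e.g. the magnum-v4-22b density below
print(base + sparse_delta)          # base plus the sparsified, rescaled update
```

TIES then resolves sign conflicts among the sparsified task vectors from all models before summing them onto the base, which is why even low-weight models in the configuration can still nudge the final style.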

### Models Merged

The following models were included in the merge:

* anthracite-org/magnum-v4-22b
* TheDrummer/Cydonia-22B-v1.3
* TheDrummer/Cydonia-22B-v1.2
* TheDrummer/Cydonia-22B-v1.1
* Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
* allura-org/MS-Meadowlark-22B
* spow12/ChatWaifu_v2.0_22B
* Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
* crestf411/MS-sunfall-v0.7.0
* unsloth/Mistral-Small-Instruct-2409 + rAIfle/Acolyte-LORA
* InferenceIllusionist/SorcererLM-22B
* unsloth/Mistral-Small-Instruct-2409 + Kaoeiri/Moingooistrial-22B-V1-Lora
* ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
* byroneverson/Mistral-Small-Instruct-2409-abliterated

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0         # Primary model for human-like writing
      density: 0.88       # Solid foundation for clear, balanced text generation
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.26        # Slightly reduced weight for nuanced creativity
      density: 0.7        # Maintains subtle creative influence
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.16        # Adjusted for balanced creativity without interference
      density: 0.68       # Harmonized with other storytelling contributions
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.18        # Refined for precision in roleplay and nuanced content
      density: 0.68       # Ensures stability without overwhelming integration
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.28        # Balanced for storytelling depth without dominance
      density: 0.77       # Smooth integration for narrative-driven content
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.3         # Retains balanced creativity and descriptive clarity
      density: 0.72       # Enhances fluency and narrative cohesion
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27        # Maintains anime-style RP and conversational tone
      density: 0.7        # Preserved for compatibility with other models
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.2         # Specialized for Japanese linguistic contexts
      density: 0.58       # Fine-tuned for focused coherence
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.25        # Subtle tone for dramatic storytelling
      density: 0.74       # Balanced for integration with other narrative styles
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.24        # Subtle addition for structured content variation
      density: 0.7        # Aligns seamlessly with the overall blend
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.23        # Provides stylistic coherence
      density: 0.74       # Supports expressive and balanced outputs
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.26        # Mythical and monster storytelling
      density: 0.72       # Balanced for integration with core models
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.12        # Light roleplay influence to prevent overheating
      density: 0.65       # Keeps roleplay-heavy elements in check
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated
    parameters:
      weight: 0.15        # Provides raw and unfiltered context
      density: 0.68       # Harmonizes with primary base model

merge_method: dare_ties  # Optimal for diverse and complex model blending
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85          # Overall density ensures logical and creative balance
  epsilon: 0.09          # Small step size for smooth blending
  lambda: 1.22           # Adjusted scaling for refined sharpness and coherence
dtype: bfloat16
```
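
The merge can be reproduced by saving this configuration to a file and running mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory`. To run the merged model for inference, a minimal `transformers` sketch follows; the repository id is a placeholder, and the prompt format assumed here is Mistral's `[INST]` instruct template inherited from the base model.

```python
# Minimal inference sketch; assumes torch and transformers are installed
# and that the merged checkpoint is available under the placeholder id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kaoeiri/merged-model"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "[INST] Write a short scene set in a misty harbor town. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```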