# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with MrRobotoAI/1 as the base model.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: MrRobotoAI/2+Epiculous/Mika-7B-LoRA
  - model: MrRobotoAI/2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - model: MrRobotoAI/2+jeiku/Synthetic_Soul_1k_Mistral_128
  - model: MrRobotoAI/2+kamrr/llama-3-8b_dolly_lora
  - model: MrRobotoAI/2+mpasila/Llama-3-LiPPA-LoRA-8B
  - model: MrRobotoAI/2+NewEden/Control-8B-V.2-Erebus-Lora
  - model: MrRobotoAI/2+nothingiisreal/llama3-8B-DWP-lora
  - model: MrRobotoAI/2+Ozaii/Wali-8B-Uncensored-Model
  - model: MrRobotoAI/2+ResplendentAI/Aura_Llama3
  - model: MrRobotoAI/2+ResplendentAI/BlueMoon_Llama3
  - model: MrRobotoAI/2+ResplendentAI/Llama3_RP_ORPO_LoRA
  - model: MrRobotoAI/2+ResplendentAI/Luna_Llama3
  - model: MrRobotoAI/2+ResplendentAI/NoWarning_Llama3
  - model: MrRobotoAI/2+ResplendentAI/Theory_of_Mind_Llama3
  - model: MrRobotoAI/2+erbacher/zephyr-convsearch-7b-v2
  - model: MrRobotoAI/2+erbacher/zephyr-rag-agent
  - model: MrRobotoAI/2+erbacher/zephyr-rag-agent-webgpt
merge_method: model_stock
base_model: MrRobotoAI/1
normalize: false
dtype: float16
```
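For intuition about what `merge_method: model_stock` does, here is a minimal NumPy sketch of the Model Stock idea applied to a single weight tensor. This is an illustration only, not mergekit's implementation: the `model_stock_merge` function and its arguments are hypothetical names, and real Model Stock operates per layer across full checkpoints.

```python
import numpy as np

def model_stock_merge(base, finetuned, eps=1e-12):
    """Illustrative Model Stock rule for one weight tensor.

    base: the base-model tensor; finetuned: a list of fine-tuned tensors.
    The interpolation ratio t is derived from the average pairwise cosine
    similarity between the fine-tuned models' deltas from the base
    (their "task vectors"): the more the deltas agree, the closer the
    result moves toward the plain average of the fine-tuned weights.
    """
    k = len(finetuned)
    deltas = [f - base for f in finetuned]

    # Average cosine similarity over all pairs of task vectors.
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
    cos = float(np.mean(cosines)) if cosines else 1.0

    # Model Stock interpolation ratio: t = k*cos / (1 + (k-1)*cos).
    t = k * cos / (1 + (k - 1) * cos)

    avg = sum(finetuned) / k
    return t * avg + (1 - t) * base
```

When the fine-tuned deltas all point the same way (cosine near 1), t approaches 1 and the merge is close to their average; when they are uncorrelated (cosine near 0), t collapses toward 0 and the result stays near the base weights.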
## Model Details

- Model: MrRobotoAI/3
- Model size: 8.03B params (Safetensors)
- Tensor type: FP16