This is a merge of pre-trained language models created using mergekit.
DeepSeek-R1-Distill-Llama-8B has been merged in at a low weight in the hope of increasing the reasoning capability of the resulting model.
Built with Llama.
This model was merged using the task arithmetic merge method, with meta-llama/Llama-3.1-8B as the base.
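For intuition, task arithmetic treats each fine-tuned model as a "task vector" (its weight delta relative to the base), scales those vectors, and adds them back onto the base model's weights. Below is a minimal sketch of that idea on plain state dicts; the function name, the dict representation, and the handling of normalize are illustrative assumptions, not mergekit's actual implementation.

```python
import torch

def task_arithmetic_merge(base, finetuned, weights, normalize=True):
    """Illustrative task-arithmetic merge over state dicts (not mergekit's API).

    base: {name: tensor} weights of the base model
    finetuned: list of {name: tensor} state dicts fine-tuned from that base
    weights: per-model scaling factors, e.g. 0.1 and 0.9 as in the config below
    """
    scale = sum(weights) if normalize and sum(weights) != 0 else 1.0
    merged = {}
    for name, base_param in base.items():
        # A task vector is the delta a fine-tune adds on top of the base weights.
        delta = torch.zeros_like(base_param)
        for ft, w in zip(finetuned, weights):
            delta += w * (ft[name] - base_param)
        # `normalize: true` is approximated here as rescaling by the weight sum.
        merged[name] = base_param + delta / scale
    return merged
```

In this merge, DeepSeek-R1-Distill-Llama-8B contributes its task vector at weight 0.1 and grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B at weight 0.9.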
The following models were included in the merge:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B
The following YAML configuration was used to produce this model:
base_model: meta-llama/Llama-3.1-8B
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: true
models:
  - model: meta-llama/Llama-3.1-8B
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
    parameters:
      weight: 0.1
  - model: grimjim/SauerHuatuoSkywork-o1-Llama-3.1-8B
    parameters:
      weight: 0.9
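Assuming the configuration above is saved as config.yaml, the merge can be reproduced with mergekit's mergekit-yaml command-line entry point (see the mergekit repository for installation details). The resulting checkpoint then loads like any other Llama-3.1-8B model with transformers; in the sketch below, the model id is a placeholder for the local merge output directory or the published repository, not a confirmed name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: point this at the mergekit output directory or the published repo.
model_id = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain the difference between speed and velocity."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```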