# DraftReasoner-2x7B-MoE-v0.1
Experimental 2-expert MoE merge built from the following components:

- mlabonne/Marcoro14-7B-slerp as the base model.
- OpenHermes-2.5-Mistral-7B as expert 0.
- WizardMath-7B-V1.1 as expert 1.
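For reference, below is a minimal sketch of how a two-expert merge like this is typically declared with mergekit's MoE tooling, expressed as a Python dict dumped to YAML. The gate mode, dtype, expert 0's routing keywords, and the full Hugging Face repo ids are assumptions; only the math expert's keywords come from this card (see Notes below).

```python
# Sketch of a mergekit-style MoE config matching the layout above.
# Assumptions (not stated on this card): gate_mode, dtype, expert 0's
# routing keywords, and the full repo ids for the two experts.
import yaml

moe_config = {
    "base_model": "mlabonne/Marcoro14-7B-slerp",
    "gate_mode": "hidden",  # assumed; mergekit also supports "cheap_embed" / "random"
    "dtype": "bfloat16",    # assumed
    "experts": [
        {
            # Expert 0: general chat/instruction model (repo id assumed).
            "source_model": "teknium/OpenHermes-2.5-Mistral-7B",
            "positive_prompts": ["chat", "assistant", "explain"],  # assumed
        },
        {
            # Expert 1: math model; keywords taken from the Notes section.
            "source_model": "WizardLM/WizardMath-7B-V1.1",
            "positive_prompts": ["math", "reason", "solve", "count"],
        },
    ],
}

print(yaml.safe_dump(moe_config, sort_keys=False))
```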
## Notes
Please evaluate before using this model in any application pipeline. The math expert (expert 1) is activated by gate keywords in the prompt such as 'math', 'reason', 'solve', and 'count'.
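A minimal usage sketch with `transformers` follows; the repo id is hypothetical (substitute the actual one), and the generation settings are illustrative only.

```python
# Minimal usage sketch; the repo id below is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/DraftReasoner-2x7B-MoE-v0.1"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# A keyword such as "solve" should route tokens toward the math expert,
# per the activation keywords listed above.
prompt = "Solve: if 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```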