# ⚛️ Gluon-8B

This is a merge of pre-trained language models created using mergekit.
## Quantizations

- GGUF
## Models Merged

The following models were included in the merge, each paired with a LoRA adapter via mergekit's `model+lora` syntax:

- NeverSleep/Lumimaid-v0.2-8B + kloodia/lora-8b-medic
- nothingiisreal/L3.1-8B-Celeste-V1.5 + kloodia/lora-8b-bio
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated + Azazelle/RP_Format_QuoteAsterisk_Llama3
- vicgalle/Configurable-Llama-3.1-8B-Instruct + kloodia/lora-8b-physic
## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct+kloodia/lora-8b-physic
  - model: NeverSleep/Lumimaid-v0.2-8B+kloodia/lora-8b-medic
  - model: nothingiisreal/L3.1-8B-Celeste-V1.5+kloodia/lora-8b-bio
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
normalize: true
int8_mask: true
dtype: float16
```
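A merge like this is typically run by passing the config to mergekit's CLI (e.g. `mergekit-yaml config.yaml ./output-dir`, assuming the package is installed and there is enough disk for five 8B checkpoints). As a sketch, the snippet below only parses the config with PyYAML and unpacks the `model+lora` pairs, without downloading anything:

```python
import yaml

# The merge configuration from this card, verbatim.
CONFIG = """\
models:
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct+kloodia/lora-8b-physic
  - model: NeverSleep/Lumimaid-v0.2-8B+kloodia/lora-8b-medic
  - model: nothingiisreal/L3.1-8B-Celeste-V1.5+kloodia/lora-8b-bio
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
normalize: true
int8_mask: true
dtype: float16
"""

config = yaml.safe_load(CONFIG)

# mergekit's "model+lora" notation pairs a base checkpoint with a LoRA adapter.
for entry in config["models"]:
    base, lora = entry["model"].split("+")
    print(f"{base}  (adapter: {lora})")

print(config["merge_method"], "on base", config["base_model"])
```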
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.66 |
| IFEval (0-shot)     | 50.53 |
| BBH (3-shot)        | 30.34 |
| MATH Lvl 5 (4-shot) | 12.54 |
| GPQA (0-shot)       |  8.28 |
| MuSR (0-shot)       |  9.09 |
| MMLU-PRO (5-shot)   | 31.20 |
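The reported average is consistent with an unweighted mean of the six benchmark scores, which a quick check confirms:

```python
# Per-benchmark scores from the table above.
scores = {
    "IFEval (0-shot)": 50.53,
    "BBH (3-shot)": 30.34,
    "MATH Lvl 5 (4-shot)": 12.54,
    "GPQA (0-shot)": 8.28,
    "MuSR (0-shot)": 9.09,
    "MMLU-PRO (5-shot)": 31.20,
}

# Unweighted mean across the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 23.66
```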