# KRONOS V1 P3
This is a merge of Meta Llama 3.1 8B Instruct and REILX's "750MB" LoRA, created using llm-tools.
The primary purpose of this model is to be merged into other models in the same family using the TIES merge method.
Creating quants for this is entirely unnecessary.
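The TIES merge method mentioned above resolves interference between task vectors (per-model deltas from a shared base) by trimming low-magnitude entries, electing a per-parameter majority sign, and averaging only the values that agree with it. A minimal NumPy sketch of that procedure, for illustration only (tools like mergekit implement the full method over real checkpoints):

```python
import numpy as np

def ties_merge(deltas, density=0.5):
    """Toy TIES merge over 1-D task vectors (deltas from a shared base).

    density: fraction of highest-magnitude entries kept per delta.
    """
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        # Trim: zero all but the top-k entries by magnitude
        thresh = np.sort(np.abs(d))[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)
    # Elect a sign per parameter from the magnitude-weighted sum
    elected = np.sign(stacked.sum(axis=0))
    # Disjoint mean: average only entries agreeing with the elected sign
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return (stacked * agree).sum(axis=0) / counts

# Two toy task vectors with a sign conflict in the first entry
a = np.array([0.8, 0.1, -0.5, 0.0])
b = np.array([-0.2, 0.9, -0.5, 0.3])
merged = ties_merge([a, b], density=0.75)  # conflicting -0.2 is trimmed away
```

The sign election step is what lets agreeing parameters (like the shared `-0.5`) average cleanly while conflicting, low-magnitude updates are discarded rather than cancelling each other out.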
## Merge Details
### Configuration
The following shell command was used to produce this model:

```shell
python /llm-tools/merge-lora.py -m unsloth/Meta-Llama-3.1-8B-Instruct -l REILX/Llama-3-8B-Instruct-750Mb-lora
```
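Since this model is intended as an input to downstream TIES merges, a hypothetical mergekit-style configuration might look like the following. The second input model, weights, and densities are placeholders, not values used by the author:

```yaml
# Hypothetical TIES merge config (mergekit syntax); values are illustrative
models:
  - model: T145/KRONOS-8B-V1-P3
    parameters:
      density: 0.5   # placeholder trim density
      weight: 0.5    # placeholder mixing weight
  - model: unsloth/Meta-Llama-3.1-8B-Instruct   # placeholder second input
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
dtype: bfloat16
```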
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| Average | 25.67 |
| IFEval (0-Shot) | 71.37 |
| BBH (3-Shot) | 30.27 |
| MATH Lvl 5 (4-Shot) | 18.35 |
| GPQA (0-Shot) | 1.34 |
| MuSR (0-Shot) | 5.96 |
| MMLU-PRO (5-Shot) | 26.72 |