Uploaded model
- Developed by: theprint
- License: apache-2.0
- Finetuned from model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This Phi-3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 17.39 |
| IFEval (0-Shot) | 24.09 |
| BBH (3-Shot) | 28.45 |
| MATH Lvl 5 (4-Shot) | 8.46 |
| GPQA (0-shot) | 5.48 |
| MuSR (0-shot) | 9.22 |
| MMLU-PRO (5-shot) | 28.63 |
Downloads last month: 61
Inference Providers
This model is not currently available via any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.
Model tree for theprint/phi-3-mini-4k-python
Datasets used to train theprint/phi-3-mini-4k-python
Evaluation results
- IFEval (0-Shot), strict accuracy: 24.09 (Open LLM Leaderboard)
- BBH (3-Shot), normalized accuracy: 28.45 (Open LLM Leaderboard)
- MATH Lvl 5 (4-Shot), exact match: 8.46 (Open LLM Leaderboard)
- GPQA (0-shot), acc_norm: 5.48 (Open LLM Leaderboard)
- MuSR (0-shot), acc_norm: 9.22 (Open LLM Leaderboard)
- MMLU-PRO (5-shot) test set, accuracy: 28.63 (Open LLM Leaderboard)