---
license: other
---

# llama2-13b-megacode2-oasst

- sampling report: [2023-08-15_andreaskoepf_llama2-13b-megacode2-oasst_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-15_andreaskoepf_llama2-13b-megacode2-oasst_sampling_noprefix2.json)
### Prompt template

The [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:

`"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"`

Multi-line:
```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
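A minimal generation sketch with Hugging Face `transformers`, assuming the model is published under the `OpenAssistant/llama2-13b-megacode2-oasst` repo id (inferred from the leaderboard links below) and using a placeholder question; adapt both to your setup:

```python
# Minimal sketch: prompt the model with the ChatML template described above.
# The repo id below is an assumption inferred from this card, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/llama2-13b-megacode2-oasst"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt exactly as the template specifies, ending with the
# assistant header so the model continues with its answer.
prompt = (
    "<|im_start|>user\n"
    "What is the difference between a list and a tuple in Python?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, i.e. the assistant's answer.
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```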
### Credits & Special Thanks

- Compute was generously sponsored by the EPFL [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/).
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated and published the [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored) dataset.
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenAssistant__llama2-13b-megacode2-oasst).
| Metric               | Value |
|----------------------|-------|
| Avg.                 | 49.61 |
| ARC (25-shot)        | 60.67 |
| HellaSwag (10-shot)  | 81.93 |
| MMLU (5-shot)        | 57.38 |
| TruthfulQA (0-shot)  | 47.85 |
| Winogrande (5-shot)  | 76.16 |
| GSM8K (5-shot)       | 15.54 |
| DROP (3-shot)        | 7.74  |
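As a quick reader-side sanity check (not part of the leaderboard output), the reported "Avg." is consistent with the unweighted mean of the seven benchmark scores:

```python
# Verify that "Avg." is the unweighted mean of the seven benchmark scores.
scores = {
    "ARC": 60.67, "HellaSwag": 81.93, "MMLU": 57.38, "TruthfulQA": 47.85,
    "Winogrande": 76.16, "GSM8K": 15.54, "DROP": 7.74,
}
print(round(sum(scores.values()) / len(scores), 2))  # -> 49.61
```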