|
--- |
|
license: apache-2.0 |
|
library_name: peft |
|
tags: |
|
- generated_from_trainer |
|
base_model: meta-llama/Meta-Llama-3-8B |
|
model-index: |
|
- name: llama-airo-3
|
results: [] |
|
datasets: |
|
- jondurbin/airoboros-3.2 |
|
--- |
|
|
|
![](https://raw.githubusercontent.com/saucam/models/main/llama-aero.png) |
|
|
|
# llama-airo-3 |
|
|
|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
|
|
|
## Details |
|
|
|
This model is a LoRA (PEFT) fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [jondurbin/airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2) dataset, trained with Axolotl.
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.8437 |
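
### Usage

A minimal usage sketch, assuming the adapter lives at `saucam/llama-airo-3` (the repo linked in the eval tables below) and that you have access to the gated Llama 3 base weights:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "saucam/llama-airo-3"  # adapter repo, as linked in the eval tables below

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```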
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 0.0002 |
|
- train_batch_size: 2 |
|
- eval_batch_size: 2 |
|
- seed: 42 |
|
- gradient_accumulation_steps: 4 |
|
- total_train_batch_size: 8 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_steps: 10 |
|
- num_epochs: 1 |
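
For illustration only, the settings above map roughly onto the following `transformers` `TrainingArguments`; the actual run was driven by Axolotl's YAML config, so treat this as a sketch rather than the exact setup:

```python
from transformers import TrainingArguments

# Approximate TrainingArguments mirroring the hyperparameters above
# (the real run used Axolotl, not this API directly).
args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,   # 2 per device x 4 steps = total train batch size 8
    optim="adamw_torch",             # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=1,
)
```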
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | |
|
|:-------------:|:-----:|:----:|:---------------:| |
|
| 1.1845 | 0.0 | 1 | 1.1821 | |
|
| 0.9328 | 0.25 | 114 | 0.9228 | |
|
| 0.8961 | 0.5 | 228 | 0.8713 | |
|
| 0.824 | 0.75 | 342 | 0.8437 | |
|
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.10.0 |
|
- Transformers 4.40.0.dev0 |
|
- Pytorch 2.1.2+cu118 |
|
- Datasets 2.15.0 |
|
- Tokenizers 0.15.0 |
|
|
|
## Eval Results |
|
|
|
|Benchmark| Model |agieval|gpt4all|bigbench|truthfulqa|Average| |
|
|---------|----------------------------------------------------------|------:|------:|-------:|---------:|------:| |
|
|nous |[llama-airo-3](https://huggingface.co/saucam/llama-airo-3)| 36.59| 72.24| 39.26| 56.3| 51.1| |
|
|nous     |[meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)|  31.1|  69.95|    36.7|     43.91|  45.42|
|
|
|
|Benchmark| Model |winogrande| arc |gsm8k|mmlu |truthfulqa|hellaswag|Average| |
|
|---------|----------------------------------------------------------|---------:|----:|----:|----:|---------:|--------:|------:| |
|
|openllm |[llama-airo-3](https://huggingface.co/saucam/llama-airo-3)| 78.22|61.01|56.33|64.79| 56.35| 82.42| 66.52| |
|
|openllm |[Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)| 77.58|57.51|50.87|65.04| 43.93| 82.09| 62.84| |
|
|
|
Detailed Results: https://github.com/saucam/model_evals/tree/main/saucam/llama-airo-3 |
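
Scores like the ones above can in principle be reproduced with EleutherAI's lm-evaluation-harness. A hedged sketch of its Python API follows; the task names here are illustrative, and the exact task mix behind the `nous` and `openllm` suites is documented in the linked repo:

```python
import lm_eval

# Score the base model with the LoRA adapter attached.
# Task names are illustrative; see the linked model_evals repo
# for the exact task sets behind the suites above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=meta-llama/Meta-Llama-3-8B,"
        "peft=saucam/llama-airo-3,"
        "dtype=bfloat16"
    ),
    tasks=["winogrande", "arc_challenge", "gsm8k", "hellaswag"],
)
print(results["results"])
```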