---
license: llama3
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- yihanwang617/ultrachat_200k_processed_indicator_0.6_4k
model-index:
- name: llama-3-qlora-ultrachat-200k-processed-indicator-0.6
  results: []
---

# llama-3-qlora-ultrachat-200k-processed-indicator-0.6

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the yihanwang617/ultrachat_200k_processed_indicator_0.6_4k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0200

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0614        | 0.0616 | 200  | 1.0632          |
| 1.0689        | 0.1232 | 400  | 1.0476          |
| 1.0053        | 0.1847 | 600  | 1.0413          |
| 1.0446        | 0.2463 | 800  | 1.0366          |
| 1.0091        | 0.3079 | 1000 | 1.0336          |
| 1.0093        | 0.3695 | 1200 | 1.0310          |
| 1.0086        | 0.4311 | 1400 | 1.0291          |
| 1.0362        | 0.4926 | 1600 | 1.0270          |
| 1.0155        | 0.5542 | 1800 | 1.0256          |
| 1.0138        | 0.6158 | 2000 | 1.0240          |
| 1.0392        | 0.6774 | 2200 | 1.0226          |
| 1.0079        | 0.7389 | 2400 | 1.0216          |
| 1.0139        | 0.8005 | 2600 | 1.0208          |
| 0.9857        | 0.8621 | 2800 | 1.0204          |
| 1.0258        | 0.9237 | 3000 | 1.0201          |
| 1.0147        | 0.9853 | 3200 | 1.0200          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.40.1
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
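
### Example training configuration (sketch)

The sketch below shows how the hyperparameters listed above might map onto a QLoRA SFT run with `trl`'s `SFTTrainer` on a 4-bit `bitsandbytes` base model. It is illustrative only: the LoRA rank/alpha/dropout, dataset split names, `bf16` setting, output directory, and `max_seq_length` are assumptions not stated in this card, and the exact `SFTTrainer` signature varies across `trl` versions.

```python
# Illustrative sketch mirroring the listed hyperparameters; LoRA settings,
# split names, and max_seq_length are placeholders not specified by this card.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base_id = "meta-llama/Meta-Llama-3-8B"
dataset = load_dataset("yihanwang617/ultrachat_200k_processed_indicator_0.6_4k")

# 4-bit NF4 quantization, as in a typical QLoRA setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Illustrative LoRA values; the card does not specify rank/alpha/dropout.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

args = TrainingArguments(
    output_dir="llama-3-qlora-ultrachat-200k-processed-indicator-0.6",
    learning_rate=2e-4,              # 0.0002
    per_device_train_batch_size=2,   # train_batch_size
    per_device_eval_batch_size=4,    # eval_batch_size
    gradient_accumulation_steps=8,   # 2 per device x 4 GPUs x 8 = 64 total
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                       # assumed; not stated in the card
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train_sft"],  # split name assumed
    eval_dataset=dataset["test_sft"],    # split name assumed
    peft_config=peft_config,
    tokenizer=tokenizer,
    max_seq_length=4096,                 # suggested by the "_4k" dataset suffix
    # Depending on the dataset schema, dataset_text_field= or formatting_func=
    # may also be required.
)
trainer.train()
```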
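
## Example usage (sketch)

A minimal inference sketch: load the base model in 4-bit and attach the PEFT adapter. The adapter repo id below assumes the adapter is published under the same owner as the dataset; this is an assumption, not confirmed by the card.

```python
# Illustrative inference sketch; the adapter repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "yihanwang617/llama-3-qlora-ultrachat-200k-processed-indicator-0.6"  # assumed repo id

# Load the base model in 4-bit NF4, matching a typical QLoRA setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

prompt = "Explain QLoRA fine-tuning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```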