Built with Axolotl

c9860bba-7b91-4957-862a-65dc2f331287

This model is a PEFT adapter fine-tuned from unsloth/Llama-3.2-3B; the training dataset is not recorded in this card. It achieves the following results on the evaluation set:

  • Loss: 1.6477
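
Because this repository contains only a PEFT adapter, it has to be loaded on top of the base model. A minimal sketch, assuming the repo id published with this card and a standard Transformers/PEFT setup (the dtype and device placement are illustrative choices, not values from the card):

```python
# Sketch: load the base model, attach this PEFT adapter, and generate.
# Repo ids come from the card; dtype/device settings are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Llama-3.2-3B"
adapter_id = "lesso18/c9860bba-7b91-4957-862a-65dc2f331287"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```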

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):

  • learning_rate: 0.000218
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: AdamW (bitsandbytes 8-bit, adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
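
As a rough equivalent (not the exact Axolotl-generated configuration), the settings above map onto transformers.TrainingArguments as sketched below; output_dir is a placeholder, and anything not listed above is left at its default:

```python
# Hedged reconstruction of the hyperparameters above as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",           # placeholder: not stated in the card
    learning_rate=0.000218,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # 4 per device x 2 steps = total batch size 8
    optim="adamw_bnb_8bit",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```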

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0001 | 1    | 1.8829          |
| 1.6893        | 0.0067 | 50   | 1.9148          |
| 1.628         | 0.0134 | 100  | 1.8391          |
| 1.6748        | 0.0201 | 150  | 1.8245          |
| 1.7607        | 0.0268 | 200  | 1.7825          |
| 1.7473        | 0.0335 | 250  | 1.7162          |
| 1.4453        | 0.0402 | 300  | 1.6977          |
| 1.6703        | 0.0470 | 350  | 1.6720          |
| 1.5562        | 0.0537 | 400  | 1.6524          |
| 1.5463        | 0.0604 | 450  | 1.6464          |
| 1.6407        | 0.0671 | 500  | 1.6477          |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
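
A quick way to check a local environment against the pins above (a convenience sketch; assumes the packages are installed under their standard module names):

```python
# Print installed versions of the libraries pinned in this card.
import datasets, peft, tokenizers, torch, transformers

for name, module in [
    ("PEFT", peft),
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```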