# qlora-out
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.2423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map to `transformers.TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
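
As a rough illustration, the hyperparameters above correspond to the following `transformers.TrainingArguments`. This is a minimal sketch only: the exact training script, dataset, and QLoRA/PEFT configuration are not documented in this card, so anything beyond the listed values is omitted.

```python
# Minimal sketch: the listed hyperparameters expressed as
# transformers.TrainingArguments. output_dir matches the model name;
# the quantization/LoRA setup is not part of this card and is left out.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qlora-out",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # 2 per device x 4 steps = total batch size 8
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=1,
)
```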
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5123        | 0.01  | 1    | 1.5038          |
| 1.3662        | 0.06  | 5    | 1.4103          |
| 1.1836        | 0.11  | 10   | 1.3055          |
| 1.2761        | 0.17  | 15   | 1.2810          |
| 1.1779        | 0.22  | 20   | 1.2696          |
| 1.1242        | 0.28  | 25   | 1.2642          |
| 1.2414        | 0.33  | 30   | 1.2588          |
| 1.1382        | 0.39  | 35   | 1.2555          |
| 1.2094        | 0.45  | 40   | 1.2520          |
| 1.1049        | 0.50  | 45   | 1.2504          |
| 1.1709        | 0.56  | 50   | 1.2487          |
| 1.0981        | 0.61  | 55   | 1.2463          |
| 1.1902        | 0.67  | 60   | 1.2446          |
| 1.1526        | 0.72  | 65   | 1.2446          |
| 1.1319        | 0.78  | 70   | 1.2440          |
| 1.1913        | 0.84  | 75   | 1.2430          |
| 1.1875        | 0.89  | 80   | 1.2424          |
| 1.1454        | 0.95  | 85   | 1.2423          |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
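
To try the model, one option is to load the base model and apply this repository's adapter with PEFT. The snippet below is a minimal sketch, not the card author's documented usage: it assumes the LoRA adapter weights are published at `tomjennings100/absumm`, that `peft` is installed alongside the framework versions above, and the prompt string is purely illustrative.

```python
# Minimal sketch: load mistralai/Mistral-7B-v0.1 and apply the adapter
# from this repo (assumed to live at tomjennings100/absumm) via PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "tomjennings100/absumm")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Illustrative prompt only; the intended input format is not documented.
inputs = tokenizer("Summarize the following text: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```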