# Ministral-8B-Instruct-2410-PsyCourse-fold10
This model is a fine-tuned version of mistralai/Ministral-8B-Instruct-2410 on the course-train-fold1 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0309
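Since the framework versions below list PEFT, this checkpoint is presumably a parameter-efficient (LoRA-style) adapter rather than a full standalone model. A minimal inference sketch under that assumption (repository ids taken from this card; everything else is illustrative):

```python
# Minimal inference sketch, assuming this checkpoint is a PEFT (LoRA) adapter
# on top of the base model; PEFT 0.12.0 is listed under framework versions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Ministral-8B-Instruct-2410"
adapter_id = "chchen/Ministral-8B-Instruct-2410-PsyCourse-fold10"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# The base model is instruction-tuned, so format prompts with its chat template.
messages = [{"role": "user", "content": "Summarize classical conditioning in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```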
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
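These settings imply an effective batch size of 16 (per-device batch size 1 × 16 gradient-accumulation steps on a single device). As a rough guide, here is how they might map onto transformers' TrainingArguments; the training script itself is not part of this card, and the output_dir is hypothetical:

```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments
# (field names from Transformers 4.46); not the card author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ministral-psycourse-fold10",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # 1 x 16 = total train batch size 16
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```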
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
0.2583 | 0.0770 | 50 | 0.2419 |
0.085 | 0.1539 | 100 | 0.0695 |
0.0606 | 0.2309 | 150 | 0.0581 |
0.0577 | 0.3078 | 200 | 0.0537 |
0.0433 | 0.3848 | 250 | 0.0429 |
0.0404 | 0.4617 | 300 | 0.0466 |
0.0443 | 0.5387 | 350 | 0.0458 |
0.0499 | 0.6156 | 400 | 0.0443 |
0.0288 | 0.6926 | 450 | 0.0398 |
0.0305 | 0.7695 | 500 | 0.0376 |
0.0417 | 0.8465 | 550 | 0.0345 |
0.0348 | 0.9234 | 600 | 0.0352 |
0.0313 | 1.0004 | 650 | 0.0361 |
0.0346 | 1.0773 | 700 | 0.0368 |
0.0259 | 1.1543 | 750 | 0.0336 |
0.0287 | 1.2312 | 800 | 0.0335 |
0.0266 | 1.3082 | 850 | 0.0346 |
0.0203 | 1.3851 | 900 | 0.0338 |
0.0359 | 1.4621 | 950 | 0.0355 |
0.0315 | 1.5391 | 1000 | 0.0363 |
0.0322 | 1.6160 | 1050 | 0.0317 |
0.0343 | 1.6930 | 1100 | 0.0374 |
0.0223 | 1.7699 | 1150 | 0.0329 |
0.0211 | 1.8469 | 1200 | 0.0370 |
0.0273 | 1.9238 | 1250 | 0.0335 |
0.0217 | 2.0008 | 1300 | 0.0324 |
0.0164 | 2.0777 | 1350 | 0.0309 |
0.0213 | 2.1547 | 1400 | 0.0343 |
0.0099 | 2.2316 | 1450 | 0.0368 |
0.0199 | 2.3086 | 1500 | 0.0350 |
0.0153 | 2.3855 | 1550 | 0.0390 |
0.0115 | 2.4625 | 1600 | 0.0372 |
0.0189 | 2.5394 | 1650 | 0.0345 |
0.0208 | 2.6164 | 1700 | 0.0358 |
0.0222 | 2.6933 | 1750 | 0.0333 |
0.0177 | 2.7703 | 1800 | 0.0341 |
0.0196 | 2.8472 | 1850 | 0.0324 |
0.0177 | 2.9242 | 1900 | 0.0336 |
0.0251 | 3.0012 | 1950 | 0.0341 |
0.009 | 3.0781 | 2000 | 0.0371 |
0.0106 | 3.1551 | 2050 | 0.0414 |
0.008 | 3.2320 | 2100 | 0.0403 |
0.0065 | 3.3090 | 2150 | 0.0404 |
0.0099 | 3.3859 | 2200 | 0.0393 |
0.0082 | 3.4629 | 2250 | 0.0370 |
0.0111 | 3.5398 | 2300 | 0.0387 |
0.0049 | 3.6168 | 2350 | 0.0390 |
0.01 | 3.6937 | 2400 | 0.0374 |
0.007 | 3.7707 | 2450 | 0.0381 |
0.0072 | 3.8476 | 2500 | 0.0394 |
0.0082 | 3.9246 | 2550 | 0.0409 |
0.0089 | 4.0015 | 2600 | 0.0406 |
0.0025 | 4.0785 | 2650 | 0.0420 |
0.0075 | 4.1554 | 2700 | 0.0460 |
0.002 | 4.2324 | 2750 | 0.0460 |
0.0033 | 4.3093 | 2800 | 0.0468 |
0.0015 | 4.3863 | 2850 | 0.0478 |
0.0043 | 4.4633 | 2900 | 0.0485 |
0.0018 | 4.5402 | 2950 | 0.0478 |
0.0035 | 4.6172 | 3000 | 0.0477 |
0.003 | 4.6941 | 3050 | 0.0481 |
0.0037 | 4.7711 | 3100 | 0.0480 |
0.0033 | 4.8480 | 3150 | 0.0479 |
0.0037 | 4.9250 | 3200 | 0.0481 |

Note that the reported evaluation loss of 0.0309 equals the minimum validation loss in this table (step 1350, epoch ≈ 2.08); validation loss drifts upward over the later epochs, which suggests the reported result comes from the best checkpoint rather than the final one.
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- PyTorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3