---
library_name: transformers
license: mit
base_model: microsoft/deberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-deberta-base-finetuned-semeval-aug
  results: []
---
# CS221-deberta-base-finetuned-semeval-aug

This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3475
- F1: 0.8781
- ROC AUC: 0.9070
- Accuracy: 0.7651
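
Since the card does not yet include usage instructions, here is a minimal inference sketch. It assumes the checkpoint is a multi-label sequence classifier (suggested by the combination of F1, ROC AUC, and subset accuracy) and that it is published under the repo id shown below; the id and the 0.5 decision threshold are assumptions, adjust them to your setup.

```python
# Minimal multi-label inference sketch (repo id and threshold are assumptions).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "CS221-deberta-base-finetuned-semeval-aug"  # replace with the full hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe how great this turned out!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: score each class independently with a sigmoid,
# then keep labels above an (assumed) 0.5 threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```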
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
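
For reproducibility, the list above maps onto a transformers `TrainingArguments` configuration roughly as follows; `output_dir` and the per-epoch evaluation strategy are assumptions inferred from the results table, not recorded in this card.

```python
# Sketch of the hyperparameters above as TrainingArguments
# (output_dir and eval_strategy are assumptions).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CS221-deberta-base-finetuned-semeval-aug",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=20,
    eval_strategy="epoch",  # the results table reports one eval per epoch
)
```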
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4197        | 1.0   | 277  | 0.3731          | 0.6624 | 0.7504  | 0.4435   |
| 0.2694        | 2.0   | 554  | 0.3172          | 0.7734 | 0.8309  | 0.5411   |
| 0.1796        | 3.0   | 831  | 0.2815          | 0.7769 | 0.8256  | 0.5890   |
| 0.1281        | 4.0   | 1108 | 0.2802          | 0.8120 | 0.8543  | 0.6305   |
| 0.0754        | 5.0   | 1385 | 0.2998          | 0.8177 | 0.8565  | 0.6495   |
| 0.067         | 6.0   | 1662 | 0.2926          | 0.8367 | 0.8755  | 0.6838   |
| 0.0303        | 7.0   | 1939 | 0.2977          | 0.8409 | 0.8750  | 0.7010   |
| 0.009         | 8.0   | 2216 | 0.3252          | 0.8474 | 0.8777  | 0.7091   |
| 0.0114        | 9.0   | 2493 | 0.3181          | 0.8539 | 0.8899  | 0.7281   |
| 0.006         | 10.0  | 2770 | 0.3390          | 0.8581 | 0.8890  | 0.7344   |
| 0.0023        | 11.0  | 3047 | 0.3407          | 0.8646 | 0.8934  | 0.7353   |
| 0.0022        | 12.0  | 3324 | 0.3453          | 0.8674 | 0.8991  | 0.7525   |
| 0.0031        | 13.0  | 3601 | 0.3488          | 0.8708 | 0.9021  | 0.7507   |
| 0.0013        | 14.0  | 3878 | 0.3440          | 0.8736 | 0.9044  | 0.7579   |
| 0.0009        | 15.0  | 4155 | 0.3475          | 0.8781 | 0.9070  | 0.7651   |
| 0.0026        | 16.0  | 4432 | 0.3455          | 0.8767 | 0.9057  | 0.7651   |
| 0.0008        | 17.0  | 4709 | 0.3504          | 0.8755 | 0.9053  | 0.7615   |
| 0.0009        | 18.0  | 4986 | 0.3549          | 0.8742 | 0.9043  | 0.7588   |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0