---
base_model: roberta-large
library_name: peft
license: mit
tags:
- generated_from_trainer
model-index:
- name: ft-roberta-large-on-bionlp2004-lora
  results: []
---

# ft-roberta-large-on-bionlp2004-lora

This model is a LoRA adapter for [roberta-large](https://huggingface.co/roberta-large), trained with the PEFT library on the cdcvd/ejpfepj dataset.

It achieves the following results on the evaluation set:

- Loss: 1.0401 (final epoch)

## Model description

This repository contains only the LoRA adapter weights produced with the PEFT library; the roberta-large base model is loaded separately. The model name suggests a token-classification (NER) fine-tune on the BioNLP 2004 shared-task corpus, although the card metadata records only the dataset id cdcvd/ejpfepj.
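
The adapter can be loaded on top of the base model with PEFT and used for token classification, as in the sketch below. The adapter repository id, the label count (11, the size of the BioNLP 2004 IOB2 tag set), and the example sentence are assumptions, since the card does not record them:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

BASE = "roberta-large"
ADAPTER = "ft-roberta-large-on-bionlp2004-lora"  # illustrative repo id

tokenizer = AutoTokenizer.from_pretrained(BASE)
# num_labels must match the classification head the adapter was trained with;
# 11 covers the IOB2 tags of the five BioNLP 2004 entity types plus "O" (an assumption).
base_model = AutoModelForTokenClassification.from_pretrained(BASE, num_labels=11)
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

text = "The IL-2 gene is expressed in activated human T lymphocytes."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, pred_ids):
    print(token, label_id)  # map ids to tags with the label list used in training
```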

## Intended uses & limitations

Not documented by the original author. Given the base model and the adapter name, the model is presumably intended for named-entity recognition over biomedical text. One limitation is visible in the training log: validation loss was lowest (0.2349) after epoch 1 and roughly four times higher from epoch 3 onward, so the final checkpoint shipped here may not be the best one.
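
If PEFT is not wanted as an inference-time dependency, the LoRA deltas can be folded into the base weights and the result saved as a standalone model. A minimal sketch, with an illustrative repository id and output path:

```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification

base = AutoModelForTokenClassification.from_pretrained("roberta-large", num_labels=11)
model = PeftModel.from_pretrained(base, "ft-roberta-large-on-bionlp2004-lora")  # illustrative id

merged = model.merge_and_unload()  # adds the LoRA deltas into the base weight matrices
merged.save_pretrained("roberta-large-bionlp2004-merged")  # hypothetical output directory
```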

## Training and evaluation data

The Trainer recorded only the dataset id cdcvd/ejpfepj. Judging by the model name, this is presumably a copy of the BioNLP 2004 shared-task NER corpus; the validation losses below were computed on its evaluation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
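
For reference, a training setup matching these hyperparameters might look as follows. The LoRA settings (`r`, `lora_alpha`, `lora_dropout`) and the label count are assumptions, since the card does not record them:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForTokenClassification, Trainer,
                          TrainingArguments)

model = AutoModelForTokenClassification.from_pretrained("roberta-large", num_labels=11)

# LoRA settings are illustrative; the card only records the base model,
# the PEFT version, and the Trainer hyperparameters listed above.
peft_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)

training_args = TrainingArguments(
    output_dir="ft-roberta-large-on-bionlp2004-lora",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # matches the per-epoch validation losses below
)

# trainer = Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=...)
# trainer.train()
```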

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log        | 1.0   | 282   | 0.2349          |
| 0.3151        | 2.0   | 564   | 0.2351          |
| 0.3151        | 3.0   | 846   | 1.0534          |
| 0.7392        | 4.0   | 1128  | 1.0375          |
| 0.7392        | 5.0   | 1410  | 1.0487          |
| 1.0405        | 6.0   | 1692  | 1.0723          |
| 1.0405        | 7.0   | 1974  | 1.0318          |
| 1.0417        | 8.0   | 2256  | 1.0493          |
| 1.0345        | 9.0   | 2538  | 1.0539          |
| 1.0345        | 10.0  | 2820  | 1.0324          |
| 1.0339        | 11.0  | 3102  | 1.0663          |
| 1.0339        | 12.0  | 3384  | 1.0691          |
| 1.0303        | 13.0  | 3666  | 1.0563          |
| 1.0303        | 14.0  | 3948  | 1.0330          |
| 1.0369        | 15.0  | 4230  | 1.0519          |
| 1.0312        | 16.0  | 4512  | 1.0440          |
| 1.0312        | 17.0  | 4794  | 1.0440          |
| 1.0321        | 18.0  | 5076  | 1.0376          |
| 1.0321        | 19.0  | 5358  | 1.0358          |
| 1.0246        | 20.0  | 5640  | 1.0331          |
| 1.0246        | 21.0  | 5922  | 1.0538          |
| 1.0343        | 22.0  | 6204  | 1.0440          |
| 1.0343        | 23.0  | 6486  | 1.0444          |
| 1.0273        | 24.0  | 6768  | 1.0497          |
| 1.0277        | 25.0  | 7050  | 1.0471          |
| 1.0277        | 26.0  | 7332  | 1.0393          |
| 1.0216        | 27.0  | 7614  | 1.0835          |
| 1.0216        | 28.0  | 7896  | 1.0508          |
| 1.0312        | 29.0  | 8178  | 1.0246          |
| 1.0312        | 30.0  | 8460  | 1.0448          |
| 1.0297        | 31.0  | 8742  | 1.0344          |
| 1.0288        | 32.0  | 9024  | 1.0446          |
| 1.0288        | 33.0  | 9306  | 1.0415          |
| 1.0252        | 34.0  | 9588  | 1.0460          |
| 1.0252        | 35.0  | 9870  | 1.0295          |
| 1.0274        | 36.0  | 10152 | 1.0508          |
| 1.0274        | 37.0  | 10434 | 1.0470          |
| 1.0263        | 38.0  | 10716 | 1.0345          |
| 1.0263        | 39.0  | 10998 | 1.0322          |
| 1.0275        | 40.0  | 11280 | 1.0398          |
| 1.0263        | 41.0  | 11562 | 1.0496          |
| 1.0263        | 42.0  | 11844 | 1.0449          |
| 1.0248        | 43.0  | 12126 | 1.0404          |
| 1.0248        | 44.0  | 12408 | 1.0387          |
| 1.025         | 45.0  | 12690 | 1.0455          |
| 1.025         | 46.0  | 12972 | 1.0415          |
| 1.0222        | 47.0  | 13254 | 1.0497          |
| 1.0233        | 48.0  | 13536 | 1.0362          |
| 1.0233        | 49.0  | 13818 | 1.0392          |
| 1.0273        | 50.0  | 14100 | 1.0401          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.3.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2