videomae-base-finetuned-kisa

This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.2647
  • Accuracy: 0.4913
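
As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the standard VideoMAE classes in Transformers; the random 16-frame clip below is only a placeholder for real video frames:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "d2o2ji/videomae-base-finetuned-kisa"

# Placeholder clip: 16 RGB frames of 224x224 (replace with frames decoded from a real video).
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```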

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • training_steps: 2725
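
For reference, here is a hedged sketch of how these hyperparameters map onto a Transformers TrainingArguments object; output_dir, the evaluation cadence, and any field not listed above are illustrative assumptions rather than the original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-kisa",  # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=2725,
    eval_strategy="steps",  # assumption: the results table below reports evaluation every 110 steps
    eval_steps=110,         # assumption inferred from the step spacing in that table
)
```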

Training results

Training Loss   Epoch     Step   Validation Loss   Accuracy
0.0741          0.0404     110   1.1353            0.5
0.0259          1.0404     220   3.7142            0.1183
0.5784          2.0404     330   2.2692            0.5
0.1384          3.0404     440   1.3726            0.5178
0.513           4.0404     550   2.5340            0.3728
0.0156          5.0404     660   2.3487            0.2041
0.0033          6.0404     770   4.4601            0.1953
0.0071          7.0404     880   4.6045            0.0917
0.004           8.0404     990   3.4062            0.4083
0.0017          9.0404    1100   2.4961            0.4941
0.4934          10.0404   1210   2.9785            0.4941
0.43            11.0404   1320   3.7030            0.5207
0.0014          12.0404   1430   2.5479            0.2012
0.0021          13.0404   1540   4.0235            0.3195
0.2387          14.0404   1650   4.6049            0.2337
0.0009          15.0404   1760   4.3070            0.2485
0.0004          16.0404   1870   4.4573            0.2515
0.5939          17.0404   1980   4.3423            0.3550
0.0013          18.0404   2090   4.3365            0.3047
0.0015          19.0404   2200   4.0964            0.2426
0.0032          20.0404   2310   4.1795            0.2988
0.0006          21.0404   2420   4.1612            0.3136

Framework versions

  • Transformers 4.48.1
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0