vit-base-patch16-224-in21k-v2024-11-07

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1875
  • Accuracy: 0.9449
  • F1: 0.8664
  • Precision: 0.8559
  • Recall: 0.8772
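
How to use

The card does not yet include a usage snippet, so here is a minimal inference sketch. It loads the checkpoint via the transformers image-classification pipeline; the repository id is taken from this card's model tree, the label set depends on the (unspecified) fine-tuning dataset, and the image path is a placeholder.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint from the Hub.
# The label mapping comes from the (unspecified) fine-tuning dataset.
classifier = pipeline(
    "image-classification",
    model="liamxostrander/vit-base-patch16-224-in21k-v2024-11-07",
)

# "example.jpg" is a placeholder path; any PIL-readable image works.
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.4f}")
```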

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.00025
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
  • mixed_precision_training: Native AMP
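
For reference, the hyperparameters above map onto a transformers TrainingArguments configuration roughly as sketched below. This is a reconstruction, not the original training script: the output directory is a placeholder, and the model, datasets, and metrics callback come from the (unspecified) fine-tuning setup.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# "output_dir" is a placeholder, not the author's actual path.
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-v2024-11-07",
    learning_rate=2.5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,  # Native AMP mixed-precision training
)
```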

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|---------------|---------|------|-----------------|----------|--------|-----------|--------|
| 0.0808        | 1.1905  | 100  | 0.1574          | 0.9408   | 0.8531 | 0.8614    | 0.8450 |
| 0.0908        | 2.3810  | 200  | 0.1861          | 0.9318   | 0.8327 | 0.8321    | 0.8333 |
| 0.1393        | 3.5714  | 300  | 0.2000          | 0.9298   | 0.8297 | 0.8191    | 0.8406 |
| 0.0911        | 4.7619  | 400  | 0.1639          | 0.9360   | 0.8448 | 0.8345    | 0.8553 |
| 0.0950        | 5.9524  | 500  | 0.1779          | 0.9393   | 0.8507 | 0.8519    | 0.8494 |
| 0.0767        | 7.1429  | 600  | 0.1691          | 0.9411   | 0.8563 | 0.8501    | 0.8626 |
| 0.0918        | 8.3333  | 700  | 0.1709          | 0.9375   | 0.8476 | 0.8415    | 0.8538 |
| 0.0742        | 9.5238  | 800  | 0.1703          | 0.9378   | 0.8471 | 0.8477    | 0.8465 |
| 0.0931        | 10.7143 | 900  | 0.1779          | 0.9351   | 0.8388 | 0.8488    | 0.8289 |
| 0.0850        | 11.9048 | 1000 | 0.1835          | 0.9351   | 0.8427 | 0.8319    | 0.8538 |
| 0.0712        | 13.0952 | 1100 | 0.1886          | 0.9339   | 0.8377 | 0.8377    | 0.8377 |
| 0.0616        | 14.2857 | 1200 | 0.1863          | 0.9351   | 0.8429 | 0.8310    | 0.8553 |
| 0.0628        | 15.4762 | 1300 | 0.1815          | 0.9387   | 0.8499 | 0.8474    | 0.8523 |
| 0.0571        | 16.6667 | 1400 | 0.1749          | 0.9449   | 0.8685 | 0.8451    | 0.8933 |
| 0.0496        | 17.8571 | 1500 | 0.1781          | 0.9384   | 0.8484 | 0.8502    | 0.8465 |
| 0.0484        | 19.0476 | 1600 | 0.1859          | 0.9354   | 0.8406 | 0.8449    | 0.8363 |
| 0.0487        | 20.2381 | 1700 | 0.1697          | 0.9446   | 0.8642 | 0.8630    | 0.8655 |
| 0.0485        | 21.4286 | 1800 | 0.1876          | 0.9369   | 0.8470 | 0.8362    | 0.8582 |
| 0.0420        | 22.6190 | 1900 | 0.1835          | 0.9414   | 0.8576 | 0.8484    | 0.8670 |
| 0.0367        | 23.8095 | 2000 | 0.1844          | 0.9432   | 0.8613 | 0.8557    | 0.8670 |
| 0.0339        | 25.0000 | 2100 | 0.1816          | 0.9411   | 0.8578 | 0.8432    | 0.8728 |
| 0.0317        | 26.1905 | 2200 | 0.1817          | 0.9423   | 0.8602 | 0.8480    | 0.8728 |
| 0.0349        | 27.3810 | 2300 | 0.1799          | 0.9426   | 0.8592 | 0.8574    | 0.8611 |
| 0.0355        | 28.5714 | 2400 | 0.1932          | 0.9402   | 0.8540 | 0.8485    | 0.8596 |
| 0.0296        | 29.7619 | 2500 | 0.1875          | 0.9449   | 0.8664 | 0.8559    | 0.8772 |
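
The card does not state how the evaluation metrics were computed. A common compute_metrics callback for this kind of Trainer setup looks like the sketch below; the "macro" averaging mode is an assumption, since the card does not say whether the scores are binary, macro, or weighted averages.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Sketch of a metrics callback for transformers.Trainer.

    Assumption: the scores in this card come from a function like this;
    "macro" averaging is a guess, as the card does not specify the mode.
    """
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```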

Framework versions

  • Transformers 4.48.2
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0