This model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Framework versions

- Transformers 4.24.0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
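The listed `lr_scheduler_type: linear` means the learning rate decays linearly from the base value to zero over training. A minimal sketch of that behaviour (an illustrative reimplementation, not the actual `transformers` scheduler; the `warmup_steps` parameter is an assumption, defaulting to the zero warmup implied by the settings above):

```python
def linear_lr(step, total_steps, base_lr=5e-05, warmup_steps=0):
    """Sketch of a linear schedule: ramp up during warmup, then decay to 0.

    base_lr=5e-05 mirrors the learning_rate listed above; warmup_steps
    is hypothetical and defaults to no warmup.
    """
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr at the end of warmup down to 0
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```

For example, halfway through training the rate is half the base value, and it reaches zero at the final step.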