MalyO2/detr_finetune_aug_no_scheduler
README.md
ADDED
@@ -0,0 +1,86 @@
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50-dc5
tags:
- generated_from_trainer
model-index:
- name: facebook/detr-resnet-50-dc5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# facebook/detr-resnet-50-dc5

This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on a balloon object-detection dataset (labels: `object` and `balloon`).
It achieves the following results on the evaluation set:
- Loss: 0.7887
- mAP: 0.55
- mAP@0.5: 0.6825
- mAP@0.75: 0.5932
- mAP Small: 0.0
- mAP Medium: 0.5352
- mAP Large: 0.7531
- mAR@1: 0.1882
- mAR@10: 0.6735
- mAR@100: 0.7588
- mAR Small: 0.0
- mAR Medium: 0.7158
- mAR Large: 0.9385
- mAP Object: -1.0
- mAR@100 Object: -1.0
- mAP Balloon: 0.55
- mAR@100 Balloon: 0.7588

The -1.0 values for the `object` class most likely mean the evaluation set contains no ground-truth instances of that class, so COCO-style evaluation cannot score it.
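
These are standard COCO-style detection metrics: mAP averaged over IoU thresholds 0.50:0.95 (plus fixed-threshold and size-specific variants) and mAR at 1/10/100 detections per image, with per-class breakdowns. As a minimal sketch of how such numbers can be computed, assuming boxes in `xyxy` pixel coordinates and using `torchmetrics` (which may not be the exact evaluation code behind this card):

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# class_metrics=True also produces the per-class values (object/balloon) reported above.
metric = MeanAveragePrecision(box_format="xyxy", iou_type="bbox", class_metrics=True)

# Toy prediction/target pair; label 1 corresponds to "balloon" in this model's label2id.
preds = [{
    "boxes": torch.tensor([[100.0, 120.0, 300.0, 400.0]]),
    "scores": torch.tensor([0.92]),
    "labels": torch.tensor([1]),
}]
targets = [{
    "boxes": torch.tensor([[95.0, 110.0, 310.0, 405.0]]),
    "labels": torch.tensor([1]),
}]

metric.update(preds, targets)
results = metric.compute()
print(results["map"], results["map_50"], results["map_75"])               # mAP @ IoU 0.50:0.95, 0.50, 0.75
print(results["map_small"], results["map_medium"], results["map_large"])  # size-specific mAP
print(results["mar_1"], results["mar_10"], results["mar_100"])            # mAR at 1/10/100 detections
```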

## Model description

A DETR (DEtection TRansformer) object detector (`DetrForObjectDetection`, roughly 41.5M parameters) fine-tuned from `facebook/detr-resnet-50-dc5`, the DC5 variant that uses a ResNet-50 backbone with a dilated C5 stage (16-pixel output stride) and 100 object queries. The detection head was trained for a two-label task (`object`, `balloon`).

## Intended uses & limitations

Intended for detecting balloons in images. Known limitations on the evaluation set: mAP and mAR on small objects are 0.0, and the `object` class has no evaluated instances, so only the `balloon` class is meaningfully scored. Training lasted only 125 steps, so performance outside the training distribution has not been assessed.
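
A minimal inference sketch, assuming the fine-tuned weights are loaded from this repository (`MalyO2/detr_finetune_aug_no_scheduler`) and that `balloons.jpg` is any local RGB image:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

model_id = "MalyO2/detr_finetune_aug_no_scheduler"  # assumed repo id; adjust as needed

processor = AutoImageProcessor.from_pretrained(model_id)
model = DetrForObjectDetection.from_pretrained(model_id)
model.eval()

image = Image.open("balloons.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in (x_min, y_min, x_max, y_max) pixel coords.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {box.tolist()}")
```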

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 125
- mixed_precision_training: Native AMP
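
A rough reconstruction of this configuration as `TrainingArguments` for the Hugging Face `Trainer`, built from the list above and the logged run config; it is a sketch, not the original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=".",                 # as logged; point this at your own output directory
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    weight_decay=1e-4,              # from the logged run config
    max_steps=125,                  # overrides num_train_epochs
    lr_scheduler_type="linear",
    warmup_steps=0,
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
    optim="adamw_torch",
    eval_strategy="steps",
    eval_steps=10,
    logging_steps=5,
    save_steps=10,
    save_total_limit=2,
    remove_unused_columns=False,    # keep image/annotation columns for DETR's collate function
)
```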

### Training results

| Training Loss | Epoch | Step | Validation Loss | mAP | mAP@0.5 | mAP@0.75 | mAP Small | mAP Medium | mAP Large | mAR@1 | mAR@10 | mAR@100 | mAR Small | mAR Medium | mAR Large | mAP Object | mAR@100 Object | mAP Balloon | mAR@100 Balloon |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:-----------:|:---------------:|
| 2.1236 | 0.7692 | 10 | 1.3396 | 0.0768 | 0.1002 | 0.0897 | 0.0 | 0.0966 | 0.1387 | 0.0765 | 0.3735 | 0.5647 | 0.0 | 0.3789 | 0.9231 | -1.0 | -1.0 | 0.0768 | 0.5647 |
| 1.5088 | 1.5385 | 20 | 1.2730 | 0.1472 | 0.1875 | 0.1691 | 0.0 | 0.1297 | 0.2723 | 0.1059 | 0.3647 | 0.6618 | 0.0 | 0.5684 | 0.9 | -1.0 | -1.0 | 0.1472 | 0.6618 |
| 1.3182 | 2.3077 | 30 | 1.2273 | 0.1816 | 0.2322 | 0.1918 | 0.0 | 0.2368 | 0.3423 | 0.1088 | 0.3941 | 0.6647 | 0.0 | 0.6053 | 0.8538 | -1.0 | -1.0 | 0.1816 | 0.6647 |
| 1.365 | 3.0769 | 40 | 1.0452 | 0.2476 | 0.3019 | 0.2823 | 0.0 | 0.3035 | 0.4146 | 0.1118 | 0.4882 | 0.7559 | 0.0 | 0.7158 | 0.9308 | -1.0 | -1.0 | 0.2476 | 0.7559 |
| 1.2013 | 3.8462 | 50 | 0.9825 | 0.3006 | 0.3891 | 0.3233 | 0.0 | 0.3747 | 0.496 | 0.1324 | 0.5265 | 0.7324 | 0.0 | 0.6737 | 0.9308 | -1.0 | -1.0 | 0.3006 | 0.7324 |
| 1.3605 | 4.6154 | 60 | 0.9307 | 0.3655 | 0.4809 | 0.4024 | 0.0 | 0.3706 | 0.5922 | 0.1324 | 0.5471 | 0.7294 | 0.0 | 0.6684 | 0.9308 | -1.0 | -1.0 | 0.3655 | 0.7294 |
| 1.0117 | 5.3846 | 70 | 0.8867 | 0.3834 | 0.5044 | 0.4222 | 0.0 | 0.4086 | 0.5963 | 0.1294 | 0.5882 | 0.7324 | 0.0 | 0.6737 | 0.9308 | -1.0 | -1.0 | 0.3834 | 0.7324 |
| 1.1224 | 6.1538 | 80 | 0.8413 | 0.478 | 0.6138 | 0.5427 | 0.0 | 0.472 | 0.7053 | 0.1676 | 0.6265 | 0.7529 | 0.0 | 0.7053 | 0.9385 | -1.0 | -1.0 | 0.478 | 0.7529 |
| 1.0109 | 6.9231 | 90 | 0.8210 | 0.5281 | 0.6515 | 0.5817 | 0.0 | 0.5391 | 0.7497 | 0.1559 | 0.6441 | 0.7735 | 0.0 | 0.7316 | 0.9538 | -1.0 | -1.0 | 0.5281 | 0.7735 |
| 1.0771 | 7.6923 | 100 | 0.8153 | 0.5506 | 0.6859 | 0.604 | 0.0 | 0.5638 | 0.7373 | 0.1794 | 0.6618 | 0.7676 | 0.0 | 0.7263 | 0.9462 | -1.0 | -1.0 | 0.5506 | 0.7676 |
| 0.9122 | 8.4615 | 110 | 0.7948 | 0.5551 | 0.6839 | 0.6097 | 0.0 | 0.5603 | 0.7503 | 0.1853 | 0.6618 | 0.7824 | 0.0 | 0.7526 | 0.9462 | -1.0 | -1.0 | 0.5551 | 0.7824 |
| 0.9918 | 9.2308 | 120 | 0.7887 | 0.55 | 0.6825 | 0.5932 | 0.0 | 0.5352 | 0.7531 | 0.1882 | 0.6735 | 0.7588 | 0.0 | 0.7158 | 0.9385 | -1.0 | -1.0 | 0.55 | 0.7588 |

### Framework versions

- Transformers 4.46.3
- PyTorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.0
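
To reproduce results under matching library versions, a quick environment check (a sketch; pin versions with whatever tooling you prefer):

```python
import datasets, tokenizers, torch, transformers

expected = {"transformers": "4.46.3", "torch": "2.4.0", "datasets": "3.1.0", "tokenizers": "0.20.0"}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    note = "OK" if have.startswith(want) else f"differs (card was built with {want})"
    print(f"{name}: {have} -> {note}")
```
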
wandb/debug-internal.log
CHANGED
@@ -16,3 +16,5 @@
{"time":"2024-11-27T18:03:24.750053009Z","level":"INFO","msg":"Resuming system monitor"}
{"time":"2024-11-27T18:03:25.209999197Z","level":"INFO","msg":"Pausing system monitor"}
{"time":"2024-11-27T18:03:29.733689435Z","level":"INFO","msg":"Resuming system monitor"}
+{"time":"2024-11-27T18:07:29.761758411Z","level":"INFO","msg":"Pausing system monitor"}
+{"time":"2024-11-27T18:07:47.561005995Z","level":"INFO","msg":"Resuming system monitor"}
wandb/debug.log
CHANGED
@@ -51,3 +51,6 @@ config: {'batch_size': 4, 'learning_rate': 0.0003, 'num_epochs': 10}
2024-11-27 18:03:30,121 INFO MainThread:1290 [wandb_run.py:_config_callback():1387] config_cb None None {'use_timm_backbone': True, 'backbone_config': None, 'num_channels': 3, 'num_queries': 100, 'd_model': 256, 'encoder_ffn_dim': 2048, 'encoder_layers': 6, 'encoder_attention_heads': 8, 'decoder_ffn_dim': 2048, 'decoder_layers': 6, 'decoder_attention_heads': 8, 'dropout': 0.1, 'attention_dropout': 0.0, 'activation_dropout': 0.0, 'activation_function': 'relu', 'init_std': 0.02, 'init_xavier_std': 1.0, 'encoder_layerdrop': 0.0, 'decoder_layerdrop': 0.0, 'num_hidden_layers': 6, 'auxiliary_loss': False, 'position_embedding_type': 'sine', 'backbone': 'resnet50', 'use_pretrained_backbone': True, 'backbone_kwargs': {'output_stride': 16, 'out_indices': [1, 2, 3, 4], 'in_chans': 3}, 'dilation': True, 'class_cost': 1, 'bbox_cost': 5, 'giou_cost': 2, 'mask_loss_coefficient': 1, 'dice_loss_coefficient': 1, 'bbox_loss_coefficient': 5, 'giou_loss_coefficient': 2, 'eos_coefficient': 0.1, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': None, 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': True, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['DetrForObjectDetection'], 'finetuning_task': None, 'id2label': {0: 'object', 1: 'balloon'}, 'label2id': {'object': 0, 'balloon': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'facebook/detr-resnet-50-dc5', '_attn_implementation_autoset': True, 'transformers_version': '4.46.3', 'classifier_dropout': 0.0, 'max_position_embeddings': 1024, 'model_type': 'detr', 'scale_embedding': False, 'output_dir': '.', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': 'steps', 'prediction_loss_only': False, 'per_device_train_batch_size': 4, 'per_device_eval_batch_size': 4, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 3e-05, 'weight_decay': 0.0001, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 3.0, 'max_steps': 125, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': './runs/Nov27_18-03-24_f5b68522d064', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 5, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 
'save_steps': 10, 'save_total_limit': 2, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 10, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '.', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': 'steps', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': True, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False}
2024-11-27 18:03:30,125 INFO MainThread:1290 [wandb_config.py:__setitem__():154] config set model/num_parameters = 41501895 - <bound method Run._config_callback of <wandb.sdk.wandb_run.Run object at 0x7ea6083ff9a0>>
2024-11-27 18:03:30,125 INFO MainThread:1290 [wandb_run.py:_config_callback():1387] config_cb model/num_parameters 41501895 None
+2024-11-27 18:07:29,761 INFO MainThread:1290 [jupyter.py:save_ipynb():387] not saving jupyter notebook
+2024-11-27 18:07:29,761 INFO MainThread:1290 [wandb_init.py:_pause_backend():444] pausing backend
+2024-11-27 18:07:47,559 INFO MainThread:1290 [wandb_init.py:_resume_backend():449] resuming backend
wandb/run-20241127_180245-wpdp0i8s/files/output.log
CHANGED
@@ -8,3 +8,7 @@
self.scaler = torch.cuda.amp.GradScaler(**kwargs)
max_steps is given, it will override any value given in num_train_epochs
wandb: WARNING The `run_name` is currently set to the same value as `TrainingArguments.output_dir`. If this was not intended, please specify a different run name by setting the `TrainingArguments.run_name` parameter.
+The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
+Token is valid (permission: write).
+Your token has been saved to /root/.cache/huggingface/token
+Login successful
wandb/run-20241127_180245-wpdp0i8s/logs/debug-internal.log
CHANGED
@@ -16,3 +16,5 @@
{"time":"2024-11-27T18:03:24.750053009Z","level":"INFO","msg":"Resuming system monitor"}
{"time":"2024-11-27T18:03:25.209999197Z","level":"INFO","msg":"Pausing system monitor"}
{"time":"2024-11-27T18:03:29.733689435Z","level":"INFO","msg":"Resuming system monitor"}
+{"time":"2024-11-27T18:07:29.761758411Z","level":"INFO","msg":"Pausing system monitor"}
+{"time":"2024-11-27T18:07:47.561005995Z","level":"INFO","msg":"Resuming system monitor"}
wandb/run-20241127_180245-wpdp0i8s/logs/debug.log
CHANGED
@@ -51,3 +51,6 @@ config: {'batch_size': 4, 'learning_rate': 0.0003, 'num_epochs': 10}
2024-11-27 18:03:30,121 INFO MainThread:1290 [wandb_run.py:_config_callback():1387] config_cb None None {'use_timm_backbone': True, 'backbone_config': None, 'num_channels': 3, 'num_queries': 100, 'd_model': 256, 'encoder_ffn_dim': 2048, 'encoder_layers': 6, 'encoder_attention_heads': 8, 'decoder_ffn_dim': 2048, 'decoder_layers': 6, 'decoder_attention_heads': 8, 'dropout': 0.1, 'attention_dropout': 0.0, 'activation_dropout': 0.0, 'activation_function': 'relu', 'init_std': 0.02, 'init_xavier_std': 1.0, 'encoder_layerdrop': 0.0, 'decoder_layerdrop': 0.0, 'num_hidden_layers': 6, 'auxiliary_loss': False, 'position_embedding_type': 'sine', 'backbone': 'resnet50', 'use_pretrained_backbone': True, 'backbone_kwargs': {'output_stride': 16, 'out_indices': [1, 2, 3, 4], 'in_chans': 3}, 'dilation': True, 'class_cost': 1, 'bbox_cost': 5, 'giou_cost': 2, 'mask_loss_coefficient': 1, 'dice_loss_coefficient': 1, 'bbox_loss_coefficient': 5, 'giou_loss_coefficient': 2, 'eos_coefficient': 0.1, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': None, 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': True, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['DetrForObjectDetection'], 'finetuning_task': None, 'id2label': {0: 'object', 1: 'balloon'}, 'label2id': {'object': 0, 'balloon': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'facebook/detr-resnet-50-dc5', '_attn_implementation_autoset': True, 'transformers_version': '4.46.3', 'classifier_dropout': 0.0, 'max_position_embeddings': 1024, 'model_type': 'detr', 'scale_embedding': False, 'output_dir': '.', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': 'steps', 'prediction_loss_only': False, 'per_device_train_batch_size': 4, 'per_device_eval_batch_size': 4, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 3e-05, 'weight_decay': 0.0001, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 3.0, 'max_steps': 125, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': './runs/Nov27_18-03-24_f5b68522d064', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 5, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 
'save_steps': 10, 'save_total_limit': 2, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 10, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '.', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': 'steps', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': True, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False}
2024-11-27 18:03:30,125 INFO MainThread:1290 [wandb_config.py:__setitem__():154] config set model/num_parameters = 41501895 - <bound method Run._config_callback of <wandb.sdk.wandb_run.Run object at 0x7ea6083ff9a0>>
2024-11-27 18:03:30,125 INFO MainThread:1290 [wandb_run.py:_config_callback():1387] config_cb model/num_parameters 41501895 None
+2024-11-27 18:07:29,761 INFO MainThread:1290 [jupyter.py:save_ipynb():387] not saving jupyter notebook
+2024-11-27 18:07:29,761 INFO MainThread:1290 [wandb_init.py:_pause_backend():444] pausing backend
+2024-11-27 18:07:47,559 INFO MainThread:1290 [wandb_init.py:_resume_backend():449] resuming backend