---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
datasets:
  - mariana-coelho-9/icd-10
---

# Uploaded model

- **Developed by:** mariana-coelho-9
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
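
For reference, a minimal inference sketch, assuming the LoRA adapters were pushed to the Hub. The repo id below is a placeholder, and `max_seq_length` is an assumption not stated in this card:

```python
from unsloth import FastLanguageModel

# Placeholder repo id -- substitute the actual adapter repository.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "mariana-coelho-9/llama-3.1-8b-icd-10",
    max_seq_length = 2048,  # assumption: not stated in this card
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

messages = [
    {"role": "user", "content": "Assign the ICD-10 code for: acute appendicitis."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt = True, return_tensors = "pt"
).to(model.device)
outputs = model.generate(input_ids = inputs, max_new_tokens = 64)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```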

## Dataset Information

Dataset used: `mariana-coelho-9/icd-10`

- Training set size: 28,261 examples
- Validation set size: 3,141 examples
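
A minimal sketch of loading this dataset from the Hub; the split names `train` and `validation` are assumptions based on the sizes above:

```python
from datasets import load_dataset

# Pull the ICD-10 fine-tuning data from the Hugging Face Hub.
dataset = load_dataset("mariana-coelho-9/icd-10")
print(dataset)  # expect ~28,261 training rows and ~3,141 validation rows
```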

## Memory and Time Statistics

GPU: Tesla T4.

- Max memory: 14.748 GB
- Peak reserved memory: 7.717 GB
- Peak reserved memory for training: 0.0 GB (i.e., training reserved no memory beyond what model loading had already reserved)
- Peak reserved memory as % of max memory: 52.326 %
- Peak reserved memory for training as % of max memory: 0.0 %
- Training time: 2033.5873 seconds (33.89 minutes)
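
These figures follow the reporting snippet used in the Unsloth notebooks; a minimal sketch of how they can be reproduced with PyTorch:

```python
import torch

gpu_stats = torch.cuda.get_device_properties(0)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)

# Peak memory ever reserved by the CUDA allocator on this device.
used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_percent = round(used_memory / max_memory * 100, 3)

print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"Peak reserved memory = {used_memory} GB ({used_percent} % of max).")
```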

## Hyperparameters

- `r`: 16
- `target_modules`: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
- `lora_alpha`: 16
- `lora_dropout`: 0
- `bias`: 'none'
- `use_gradient_checkpointing`: 'unsloth'
- `random_state`: 3407
- `use_rslora`: False
- `loftq_config`: None
- `per_device_train_batch_size`: 2
- `gradient_accumulation_steps`: 4
- `warmup_steps`: 5
- `max_steps`: 60
- `learning_rate`: 0.0002
- `fp16`: True
- `bf16`: False
- `logging_steps`: 10
- `optim`: 'adamw_8bit'
- `weight_decay`: 0.01
- `lr_scheduler_type`: 'linear'
- `seed`: 3407
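
A minimal sketch of how these values plug into Unsloth's `get_peft_model` and TRL's `SFTTrainer`; `max_seq_length`, `dataset_text_field`, and `output_dir` are assumptions not stated in this card:

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length = 2048,  # assumption: not stated in this card
    load_in_4bit = True,
)

# LoRA configuration from the list above.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
)

# Training arguments from the list above.
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset["train"],  # see the dataset-loading sketch above
    dataset_text_field = "text",       # assumption: field name not stated
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,
        learning_rate = 2e-4,
        fp16 = True,
        bf16 = False,
        logging_steps = 10,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",        # assumption: not stated in this card
    ),
)
trainer.train()
```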