Built with Axolotl

See axolotl config

axolotl version: 0.6.0

```yaml
base_model: Qwen/Qwen2.5-72B
hub_model_id: sumuks/purple-wintermute-0.2-72b
trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false
bf16: true
hf_use_auth_token: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

datasets:
  - path: sumuks/openreview_wintermute_0.2_training_data
    type: completion
    field: text
dataset_prepared_path: .axolotl_cache_data/wintermute_0.2
shuffle_merged_datasets: true
# dataset_exact_deduplication: true
val_set_size: 0.005
output_dir: ./../../outputs/purple-wintermute-0.2-72b
push_dataset_to_hub: sumuks/purple_wintermute_0.2_training_data_in_progress

sequence_length: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_r: 256
lora_alpha: 32
lora_dropout: 0.05
peft_use_rslora: true
lora_target_linear: true

gradient_accumulation_steps: 4
micro_batch_size: 8
eval_batch_size: 1
num_epochs: 3
learning_rate: 5e-5
warmup_ratio: 0.05
evals_per_epoch: 3
saves_per_epoch: 5
gradient_checkpointing: true
lr_scheduler: cosine
optimizer: paged_adamw_8bit

profiler_steps: 100
save_safetensors: true
train_on_inputs: true
wandb_project: wintermute 
wandb_name: purple-wintermute-0.2-72b
deepspeed: deepspeed_configs/zero3_bf16.json
```
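
For readers who want to map the adapter section of this config onto the PEFT API directly, the settings above correspond roughly to the following LoraConfig. This is only a sketch; Axolotl builds its own configuration internally.

```python
from peft import LoraConfig

# Rough PEFT equivalent of the adapter block above: rank 256, alpha 32,
# dropout 0.05, rank-stabilized LoRA, all linear projections targeted.
lora_config = LoraConfig(
    r=256,
    lora_alpha=32,
    lora_dropout=0.05,
    use_rslora=True,              # peft_use_rslora: true
    target_modules="all-linear",  # lora_target_linear: true
    task_type="CAUSAL_LM",
)
```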

purple-wintermute-0.2-72b

This model is a fine-tuned version of Qwen/Qwen2.5-72B on the sumuks/openreview_wintermute_0.2_training_data dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3017

Model description

This model is a LoRA adapter (r=256, alpha=32, rank-stabilized LoRA applied to all linear layers) for Qwen/Qwen2.5-72B, trained with Axolotl 0.6.0 on the sumuks/openreview_wintermute_0.2_training_data dataset.
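
A minimal sketch of loading the adapter on top of the base model with transformers and peft. Repository IDs are taken from this card; loading a 72B base model in bf16 requires substantial GPU memory, and the prompt below is only an illustrative placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-72B"
adapter_id = "sumuks/purple-wintermute-0.2-72b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # training used bf16
    device_map="auto",           # shard across available GPUs
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)

# Completion-style generation (the adapter was trained on plain text,
# not on a chat template).
inputs = tokenizer("Example prompt text", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```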

Intended uses & limitations

More information needed

Training and evaluation data

The model was trained on the sumuks/openreview_wintermute_0.2_training_data dataset in completion format (text field), with 0.5% of the shuffled data held out as the evaluation set.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256 (see the arithmetic check after this list)
  • total_eval_batch_size: 8
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 388
  • num_epochs: 3
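
The derived batch sizes and warmup length follow from the per-device settings; a quick arithmetic check (a sketch, assuming the 7,776 total optimizer steps shown in the results table below):

```python
micro_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 8
eval_batch_size = 1
warmup_ratio = 0.05
total_optimizer_steps = 7776  # final step in the training results table

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices
warmup_steps = int(warmup_ratio * total_optimizer_steps)

print(total_train_batch_size)  # 256
print(total_eval_batch_size)   # 8
print(warmup_steps)            # 388
```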

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| No log        | 0.0004 | 1    | 2.5112          |
| 1.3654        | 0.3333 | 864  | 1.6504          |
| 0.9929        | 0.6665 | 1728 | 1.4144          |
| 0.9039        | 0.9998 | 2592 | 1.3083          |
| 0.8161        | 1.3333 | 3456 | 1.2935          |
| 0.7815        | 1.6665 | 4320 | 1.2816          |
| 0.7658        | 1.9998 | 5184 | 1.2775          |
| 0.7004        | 2.3333 | 6048 | 1.2995          |
| 0.6694        | 2.6665 | 6912 | 1.3013          |
| 0.6798        | 2.9998 | 7776 | 1.3017          |

Framework versions

  • PEFT 0.14.0
  • Transformers 4.47.1
  • PyTorch 2.5.1
  • Datasets 3.2.0
  • Tokenizers 0.21.0