See axolotl config
axolotl version: 0.5.2
base_model: meta-llama/Llama-3.1-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false
resize_token_embeddings_to_32x: false
flash_attention: true
xformers_attention:
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: skymizer/Llama3.1-base-tokenized-dolma-v1_7-50B
    train_on_split: train
    type: completion
test_datasets:
  - path: skymizer/Llama3.1-tokenized-dolma-v1_7-test
    split: test
    type: completion
is_preprocess: true
skip_prepare_dataset: true
dataset_prepared_path: /mnt/home/model-team/datasets/pretokenized/Llama3.1-8B-base-tokenized-dolma-v1_7_50B-4096
hf_use_auth_token: true
output_dir: /mnt/home/model-team/models/Llama3.1-8B-v0.1-STE-0.6
resume_from_checkpoint:
auto_resume_from_checkpoints: true
sequence_len: 4096
sample_packing: true
sample_packing_group_size: 100000
sample_packing_bin_size: 200
pad_to_sequence_len: true
eval_sample_packing: false
# eval_causal_lm_metrics: ["perplexity"]
wandb_project: "sparse-tuning-cpt"
wandb_entity:
wandb_watch:
wandb_name: "Llama3.1-8B-v0.1-dolma-STE-0.6"
wandb_log_model:
# global batch size = 2 * 8 * 8 GPUs * 8 Nodes * 4096 = 4M
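# i.e. micro_batch_size 2 x gradient_accumulation_steps 8 x 64 GPUs (8 per node x 8 nodes)
# = 1024 sequences per optimizer step x 4096 tokens = 4,194,304 ≈ 4M tokens per step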
gradient_accumulation_steps: 8
micro_batch_size: 2
eval_batch_size: 1
max_steps: 10000
optimizer: adamw_torch
learning_rate: 0.00005
lr_scheduler: cosine
cosine_min_lr_ratio: 0.2
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.95
adam_eps: 0.000001
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: false
hub_model_id: "skymizer/Llama3.1-8B-v0.1-dolma-skymizer-method-0.6"
save_strategy: "steps"
save_steps: 500
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
warmup_steps: 375
eval_steps: 500
eval_table_size:
debug:
deepspeed: /root/train/axolotl/deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
seed: 42
special_tokens:
  pad_token: "<|end_of_text|>"
# Llama3.1-8B-v0.1-dolma-skymizer-method-0.6
This model is a fine-tuned version of meta-llama/Llama-3.1-8B, trained on the skymizer/Llama3.1-base-tokenized-dolma-v1_7-50B dataset (a pre-tokenized Dolma v1.7 corpus; see the axolotl config above). It achieves the following results on the evaluation set:

- Loss: 2.3883
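A minimal usage sketch with the Transformers library is below; the repository id comes from `hub_model_id` in the config above, while the prompt and generation settings are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skymizer/Llama3.1-8B-v0.1-dolma-skymizer-method-0.6"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training was run in bf16
    device_map="auto",
)

# This is a base (completion-style) model, so prompt it with plain text.
inputs = tokenizer("Deep learning is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```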
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- total_eval_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 375
- training_steps: 10000
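The schedule implied by these settings is a linear warmup over 375 steps to the peak rate of 5e-05, followed by a cosine decay toward `cosine_min_lr_ratio` × peak = 1e-05 at step 10000 (from the config above). A minimal sketch of that curve is below; the exact values produced by axolotl/Transformers may differ slightly.

```python
import math

PEAK_LR = 5e-5
MIN_LR_RATIO = 0.2   # cosine_min_lr_ratio from the config
WARMUP_STEPS = 375
MAX_STEPS = 10_000

def lr_at(step: int) -> float:
    """Approximate learning rate at a given optimizer step."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS          # linear warmup
    progress = (step - WARMUP_STEPS) / (MAX_STEPS - WARMUP_STEPS)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))  # 1 -> 0 over training
    return PEAK_LR * (MIN_LR_RATIO + (1.0 - MIN_LR_RATIO) * cosine)

for step in (0, 375, 5_000, 10_000):
    print(f"step {step:>6}: lr ≈ {lr_at(step):.2e}")
```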
## Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.2837        | 0.0001 | 1     | 2.5425          |
| 2.2557        | 0.0414 | 500   | 2.4568          |
| 2.2641        | 0.0829 | 1000  | 2.4520          |
| 2.2207        | 0.1243 | 1500  | 2.4477          |
| 2.3003        | 0.1657 | 2000  | 2.4432          |
| 2.2382        | 0.2072 | 2500  | 2.4388          |
| 2.2339        | 0.2486 | 3000  | 2.4349          |
| 2.2517        | 0.2901 | 3500  | 2.4303          |
| 2.2483        | 0.3315 | 4000  | 2.4246          |
| 2.2067        | 0.3729 | 4500  | 2.4207          |
| 2.2485        | 0.4144 | 5000  | 2.4163          |
| 2.2541        | 0.4558 | 5500  | 2.4123          |
| 2.2192        | 0.4972 | 6000  | 2.4084          |
| 2.2346        | 0.5387 | 6500  | 2.4041          |
| 2.2106        | 0.5801 | 7000  | 2.4010          |
| 2.2112        | 0.6215 | 7500  | 2.3982          |
| 2.2215        | 0.6630 | 8000  | 2.3951          |
| 2.2118        | 0.7044 | 8500  | 2.3924          |
| 2.1933        | 0.7458 | 9000  | 2.3905          |
| 2.1813        | 0.7873 | 9500  | 2.3893          |
| 2.1969        | 0.8287 | 10000 | 2.3883          |
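Treating the validation loss as the mean token-level cross-entropy, the final value of 2.3883 corresponds to a perplexity of roughly exp(2.3883) ≈ 10.9:

```python
import math

final_val_loss = 2.3883  # last row of the table above
print(f"perplexity ≈ {math.exp(final_val_loss):.1f}")  # ≈ 10.9
```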
## Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3