---
library_name: transformers
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
datasets:
- argilla/databricks-dolly-15k-curated-en
model-index:
- name: llama-68m
  results: []
---
[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

See the axolotl config below (axolotl version `0.6.0`):
```yaml
base_model: JackFram/llama-68m
batch_size: 64
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- format: custom
  path: argilla/databricks-dolly-15k-curated-en
  type:
    field_input: original-instruction
    field_instruction: original-instruction
    field_output: original-response
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
device_map: auto
eval_sample_packing: false
eval_steps: 50
flash_attention: true
gradient_checkpointing: true
group_by_length: true
hub_model_id: SystemAdmin123/llama-68m
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 5000
micro_batch_size: 32
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: /root/.sn56/axolotl/tmp/llama-68m
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: true
save_steps: 50
save_total_limit: 2
sequence_len: 2048
special_tokens:
  pad_token:
tokenizer_type: LlamaTokenizerFast
torch_dtype: bf16
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: JackFram/llama-68m-argilla/databricks-dolly-15k-curated-en
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05
```
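The `datasets` entry above maps the curated Dolly fields into an Alpaca-style prompt. Below is a minimal sketch of what that mapping produces, using a hypothetical record and only the field names and `format` string declared in the config (this snippet is illustrative, not part of the training pipeline):

```python
# Hypothetical record, using the field names declared in the config above.
example = {
    "original-instruction": "What is the capital of France?",
    "original-response": "The capital of France is Paris.",
}

def build_prompt(record):
    # format: '{instruction} {input}' — in this config both the instruction and
    # input slots are filled from 'original-instruction', so the text repeats.
    instruction = record["original-instruction"]
    input_text = record["original-instruction"]
    return f"{instruction} {input_text}"

prompt = build_prompt(example)
target = example["original-response"]
print(prompt, "->", target)
```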
# llama-68m
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the argilla/databricks-dolly-15k-curated-en dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0103
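A minimal sketch of loading the fine-tuned checkpoint with `transformers` for inference, assuming the `hub_model_id` from the config above (`SystemAdmin123/llama-68m`); swap in a local `output_dir` path if you trained your own copy:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# hub_model_id from the axolotl config; replace with a local checkpoint path if needed.
model_id = "SystemAdmin123/llama-68m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Explain what a language model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```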
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 600
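As a quick consistency check of the derived values above (assuming `gradient_accumulation_steps` of 1, which the config does not set explicitly):

```python
# Effective batch size: per-device micro batch size times number of devices.
micro_batch_size = 32
num_devices = 2
total_train_batch_size = micro_batch_size * num_devices
assert total_train_batch_size == 64

# Warmup steps: warmup_ratio applied to the actual number of training steps.
training_steps = 600
warmup_ratio = 0.05
warmup_steps = int(training_steps * warmup_ratio)
assert warmup_steps == 30
```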
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.0769 | 1 | 3.9168 |
| 2.5978 | 3.8462 | 50 | 2.8149 |
| 2.0808 | 7.6923 | 100 | 2.9664 |
| 1.6294 | 11.5385 | 150 | 3.2337 |
| 1.2699 | 15.3846 | 200 | 3.5217 |
| 1.0092 | 19.2308 | 250 | 3.7262 |
| 0.8392 | 23.0769 | 300 | 3.8683 |
| 0.7428 | 26.9231 | 350 | 3.9435 |
| 0.6952 | 30.7692 | 400 | 3.9860 |
| 0.6762 | 34.6154 | 450 | 3.9990 |
| 0.6739 | 38.4615 | 500 | 4.0167 |
| 0.6691 | 42.3077 | 550 | 4.0208 |
| 0.6667 | 46.1538 | 600 | 4.0103 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0