root/workspace/outputs/9/c78b946e-a65b-41ea-9a2b-cbb0c4234e12
This model is a fine-tuned version of NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer on the /root/workspace/input_data/8e5bb7bd2a44af4c_train_data.json dataset. It achieves the following results on the evaluation set:
- Loss: 2.0082
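
Since this checkpoint is a PEFT adapter trained on top of NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer, a minimal loading sketch is given below. It is only an illustration: the adapter path is assumed to be the output directory named in the title, and the prompt and generation settings are placeholders; adjust them for your setup.

```python
# Minimal sketch of loading this PEFT adapter for inference (assumed paths).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer"
# Adapter directory copied from the card title; adjust to wherever the adapter
# weights actually live on your machine or on the Hub.
adapter_path = "root/workspace/outputs/9/c78b946e-a65b-41ea-9a2b-cbb0c4234e12"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_path)
model.eval()

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```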
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0