models/loras2/7bdb17d0-3f6b-4921-93db-0f46c4d9d81b
This model is a fine-tuned version of OpenPipe/mistral-ft-optimized-1227 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0179
Model description
More information needed
Intended uses & limitations
More information needed
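The card gives no usage example. Assuming this repository holds a PEFT LoRA adapter for OpenPipe/mistral-ft-optimized-1227 (suggested by the repo name and the model tree below, but not stated outright in the card), a minimal loading sketch:

```python
# Sketch only: assumes this repo is a PEFT LoRA adapter whose base
# model is OpenPipe/mistral-ft-optimized-1227, as the card implies.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("OpenPipe/mistral-ft-optimized-1227")
model = PeftModel.from_pretrained(base, "corbt/example-mistral-lora")
tokenizer = AutoTokenizer.from_pretrained("OpenPipe/mistral-ft-optimized-1227")

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```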
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
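For convenience, here are the same settings expressed as a transformers.TrainingArguments sketch. The output_dir is a placeholder, and a single training device is assumed, which is what makes the effective batch size 2 × 4 = 8:

```python
# Hyperparameters from the list above as TrainingArguments;
# output_dir is a hypothetical path, not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="models/loras2/example",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # 2 per device x 4 steps = total batch 8
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=2,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```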
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 0.4795 | 0.02 | 1 | 0.4746 |
| 0.0282 | 0.2 | 12 | 0.0309 |
| 0.0168 | 0.4 | 24 | 0.0242 |
| 0.0216 | 0.59 | 36 | 0.0208 |
| 0.0167 | 0.79 | 48 | 0.0189 |
| 0.0157 | 0.99 | 60 | 0.0186 |
| 0.0156 | 1.19 | 72 | 0.0177 |
| 0.0135 | 1.38 | 84 | 0.0182 |
| 0.0139 | 1.58 | 96 | 0.0178 |
| 0.0169 | 1.78 | 108 | 0.0178 |
| 0.0111 | 1.98 | 120 | 0.0179 |
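The learning-rate schedule behind these numbers can be sketched with the transformers cosine-warmup helper. The step count of 120 is read off the table above; the parameter and optimizer below are stand-ins, not the actual training setup:

```python
# Sketch of the schedule: 10 warmup steps, then cosine decay over
# the ~120 optimizer steps shown in the results table.
import torch
from transformers import get_cosine_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))  # stand-in parameter
optimizer = torch.optim.Adam([param], lr=2e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=120
)
for step in range(120):
    optimizer.step()
    scheduler.step()
    # scheduler.get_last_lr()[0] ramps to 2e-4 by step 10, then decays
```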
Framework versions
- Transformers 4.34.1
- PyTorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
Model tree for corbt/example-mistral-lora
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned: Intel/neural-chat-7b-v3-1
- Finetuned: Intel/neural-chat-7b-v3-3
- Finetuned: OpenPipe/mistral-ft-optimized-1227 (the direct base of this model)