Base model: https://huggingface.co/WizardLM/WizardLM-13B-V1.2

This model was trained on the following dataset: https://huggingface.co/datasets/gmongaras/reddit_negative

Trained for about 600 steps with a batch size of 6 and 3 gradient accumulation steps, using LoRA adapters on all layers.
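
Below is a minimal sketch of that fine-tuning setup using the standard Transformers + PEFT workflow. Only the batch size (6), accumulation steps (3), and step count (~600) come from the description above; the LoRA rank, alpha, learning rate, target module names, and the dataset column name are assumptions for illustration.

```python
# Hedged sketch of the LoRA fine-tuning described above.
# Only batch size, accumulation steps, and max_steps are taken from the card;
# everything else (rank, alpha, lr, target modules, column name) is assumed.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "WizardLM/WizardLM-13B-V1.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA adapters applied to all attention and MLP projection layers
# (assumed module names for a LLaMA-style architecture).
lora_config = LoraConfig(
    r=16,                # assumed rank
    lora_alpha=32,       # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

dataset = load_dataset("gmongaras/reddit_negative", split="train")

def tokenize(batch):
    # "text" is an assumed column name in the dataset.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="wizardlm-13b-reddit-negative-lora",
    per_device_train_batch_size=6,   # batch size from the card
    gradient_accumulation_steps=3,   # accumulation steps from the card
    max_steps=600,                   # ~600 steps from the card
    learning_rate=2e-4,              # assumed
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```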
