---
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
---
# Conversion tool
https://github.com/practicaldreamer/vicuna_to_alpaca
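For reference, here is a minimal sketch of the kind of conversion that tool performs: turning ShareGPT-style conversation logs into Alpaca-style instruction/output pairs. The field names and output layout are assumptions based on the common ShareGPT format; see the linked repo for the actual script and the exact format it emits.

```python
# Rough illustration only -- the linked vicuna_to_alpaca repo is the real tool.
import json

def sharegpt_to_alpaca(in_path: str, out_path: str) -> None:
    with open(in_path, "r", encoding="utf-8") as f:
        conversations = json.load(f)

    pairs = []
    for convo in conversations:
        turns = convo.get("conversations", [])
        # Pair each human turn with the assistant turn that follows it.
        for human, reply in zip(turns, turns[1:]):
            if human.get("from") == "human" and reply.get("from") == "gpt":
                pairs.append({
                    "instruction": human["value"],
                    "input": "",
                    "output": reply["value"],
                })

    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(pairs, f, ensure_ascii=False, indent=2)

sharegpt_to_alpaca("ShareGPT_unfiltered.json", "alpaca_format.json")
```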
# Training with
https://github.com/oobabooga/text-generation-webui
ATM I'm using v4.3 of the dataset and training at full context.
This LoRA is already fairly functional but far from finished training; the ETA from the start of training is 200 hours.
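If you want to set up a similar run outside the web UI, a rough sketch with `transformers` + `peft` is below. The rank, alpha, and target modules are illustrative assumptions, not the exact settings of this LoRA; the actual training here is being done through text-generation-webui's training tab.

```python
# Sketch of a LoRA setup over LLaMA-13b; values marked "assumed" are not the
# settings used for this LoRA.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import LoraConfig, get_peft_model

base_path = "path/to/llama-13b"  # placeholder: local base LLaMA weights

tokenizer = LlamaTokenizer.from_pretrained(base_path)
model = LlamaForCausalLM.from_pretrained(
    base_path,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach LoRA adapters to the attention projections (typical targets for LLaMA).
lora_cfg = LoraConfig(
    r=16,                                   # assumed rank
    lora_alpha=32,                          # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# "Full context" means samples from the converted dataset are tokenized up to
# the model's native 2048-token window instead of a shorter cutoff.
MAX_LENGTH = 2048
```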
To use this LoRA, replace the base model's config files with Vicuna's (I will have them hosted here). In other words: start from the normal LLaMA model, swap in the Vicuna config files, then load the LoRA on top.
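Outside of the web UI, the same load order can be expressed with `transformers` + `peft`. A minimal sketch, assuming the base weights already carry the Vicuna config files and that the base path below is a placeholder for wherever your weights live:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_path = "path/to/llama-13b"  # base LLaMA weights with Vicuna config files swapped in
lora_path = "Neko-Institute-of-Science/VicUnLocked-13b-LoRA"

tokenizer = LlamaTokenizer.from_pretrained(base_path)
model = LlamaForCausalLM.from_pretrained(
    base_path,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the LoRA weights on top of the assembled base model.
model = PeftModel.from_pretrained(model, lora_path)

prompt = "Hello, how are you?"  # exact prompt template depends on the converted dataset format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```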