Information

OpenAssistant-Alpaca-13B-4-bit, a GPTQ-quantized model that works with Oobabooga's Text Generation WebUI and KoboldAI.
It was made by applying Serpdotai's Open Assistant 13B LoRA, trained for 4 epochs on Open Assistant's dataset, to https://huggingface.co/chavinlo/alpaca-13b.
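For example, a 4-bit 128g GPTQ checkpoint like this one can be loaded in text-generation-webui roughly as follows. This is only a sketch: the flags assume an early GPTQ-capable build of the WebUI and may differ in newer versions, and the model folder name is illustrative.

python server.py --model oasst-alpaca13b-4ep-lora-4bit-128g --wbits 4 --groupsize 128 --model_type llama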

The 4-bit weights were produced with GPTQ-for-LLaMa, using c4 as the calibration set:

python llama.py /Models/alpaca13b-oaast4ep-lora c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors oasst-alpaca13b-4ep-lora-4bit-128g.safetensors

Benchmarks

Perplexity with --true-sequential --groupsize 128 (lower is better):

Wikitext2: 6.854333400726318

Ptb-New: 12.411578178405762

C4-New: 9.355494499206543

Note: This version uses --groupsize 128, which results in better (lower) perplexity scores.
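As a sketch, the evaluation can be rerun on the saved checkpoint with GPTQ-for-LLaMa's built-in perplexity evaluation. The --load and --eval flags here are assumptions about the fork used and may be named differently in other checkouts (check python llama.py --help); wikitext2, ptb-new, and c4-new perplexities are printed at the end.

python llama.py /Models/alpaca13b-oaast4ep-lora c4 --wbits 4 --groupsize 128 --load oasst-alpaca13b-4ep-lora-4bit-128g.safetensors --eval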
