GPTQ 4-bit (no act-order) version, quantized for compatibility with text-generation-webui

Generated using scripts from https://gitee.com/yhyu13/llama_-tools

Original weights: https://huggingface.co/project-baize/baize-v2-7b

Baize is a LoRA training framework that allows fine-tuning LLaMA models on commodity GPUs.

Check out my 7B Baize GPTQ 4-bit model here: https://huggingface.co/Yhyu13/baize-v2-7b-gptq-4bit
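Outside of text-generation-webui, the quantized weights can also be loaded programmatically. The sketch below is a minimal, hypothetical example using the AutoGPTQ library; the `[|Human|]` / `[|AI|]` prompt markers are an assumption based on Baize's chat format, and the model-loading helper requires a CUDA GPU plus the `auto-gptq` and `transformers` packages.

```python
def build_baize_prompt(turns):
    """Format a list of (speaker, text) turns into a Baize-style prompt.

    Speakers are "human" or "ai". The marker format here is an assumption
    based on Baize's published chat template, not taken from this card.
    """
    tag = {"human": "[|Human|]", "ai": "[|AI|]"}
    lines = ["The conversation between human and AI assistant."]
    for speaker, text in turns:
        lines.append(f"{tag[speaker]} {text}")
    # Leave the AI tag open so the model continues from it.
    lines.append("[|AI|] ")
    return "\n".join(lines)


def generate_reply(user_message, repo="Yhyu13/baize-v2-7b-gptq-4bit"):
    """Load the GPTQ checkpoint and generate one reply.

    Requires a CUDA GPU; imports are deferred so the prompt helper above
    can be used without the heavy dependencies installed.
    """
    from auto_gptq import AutoGPTQForCausalLM
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)
    model = AutoGPTQForCausalLM.from_quantized(
        repo, device="cuda:0", use_safetensors=True
    )
    prompt = build_baize_prompt([("human", user_message)])
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

The prompt builder is separated from the model call so the chat formatting can be reused (or corrected) independently of the loading code.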
