GPTQ 4-bit (no act-order) version, quantized for compatibility so that it works in text-generation-webui.

Generated using scripts from https://gitee.com/yhyu13/llama_-tools

Original weights: https://huggingface.co/project-baize/baize-v2-7b

Baize is a LoRA training framework that allows fine-tuning LLaMA models on commodity GPUs.

Check out my 13B Baize GPTQ 4-bit model here: https://huggingface.co/Yhyu13/baize-v2-13b-gptq-4bit

This model has an interesting behavior of generating follow-up questions that keep the conversation going. You can simply hit "Continue" and the chat carries on all by itself.
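
If you want to try the checkpoint from a plain Python script instead of text-generation-webui, the sketch below shows one possible way using the AutoGPTQ library. It is an assumption-laden example rather than the loading path this card documents: the repo id is the 13B link above (swap in the 7B repo as appropriate), and the safetensors flag, prompt, and sampling settings are placeholders.

```python
# Minimal sketch of loading a GPTQ checkpoint with AutoGPTQ, assuming a CUDA GPU.
# The repo id is the 13B link from this card; substitute the 7B repo as needed.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_id = "Yhyu13/baize-v2-13b-gptq-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,  # assumption about the checkpoint file format; adjust to match the repo
)

prompt = "Hello, can you introduce yourself?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```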

