The original model is here: https://huggingface.co/Lin-Chen/ShareGPT4V-7B
This is the K-quant GGUF variant, for inference with llama.cpp's llava-cli.

In my tests, this is currently the best LLaVA-based vision model.
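
A minimal usage sketch with llama.cpp's llava-cli is shown below. The file names are placeholders, not the exact files in this repo; substitute the GGUF model and mmproj files you actually download.

```bash
# Run the quantized ShareGPT4V model with llava-cli (llama.cpp).
# File names below are examples only -- replace with the actual files from this repo.
./llava-cli \
  -m ShareGPT4V-7B.Q6_K.gguf \          # quantized language model
  --mmproj mmproj-model-f16.gguf \       # vision projector (mmproj) file
  --image input.jpg \                    # image to describe
  -p "Describe this image in detail." \  # prompt
  --temp 0.1                             # low temperature for factual output
```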

Provided files: 6-bit and 16-bit GGUF.
