A fine-tuned version of `xtuner/llava-llama-3-8b-v1_1`, trained on the `gokaygokay/random_instruct_docci` dataset.

Install the dependencies:

```bash
pip install git+https://github.com/haotian-liu/LLaVA.git --no-deps
pip install lmdeploy
```
Example usage with lmdeploy:

```python
# Google Colab error fix: allow nested event loops
import nest_asyncio
nest_asyncio.apply()

from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('gokaygokay/llava-llama3-docci')

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
```
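Sampling behavior can be adjusted by passing a `GenerationConfig` to the pipeline call. The sketch below is illustrative, not tested against this model; the parameter values are arbitrary examples, and it assumes the same model and test image as above.

```python
# Sketch: custom generation settings with lmdeploy (values are illustrative).
from lmdeploy import pipeline, GenerationConfig
from lmdeploy.vl import load_image

pipe = pipeline('gokaygokay/llava-llama3-docci')

# max_new_tokens, top_p, and temperature are standard GenerationConfig
# fields; the values here are arbitrary, not tuned recommendations.
gen_config = GenerationConfig(max_new_tokens=512, top_p=0.8, temperature=0.7)

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
responses = pipe([('describe this image', image)], gen_config=gen_config)

# For a list of prompts the pipeline returns a list of Response objects;
# the generated string is in the .text attribute.
print(responses[0].text)
```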
Model size: 8.35B parameters (Safetensors; FP16/F32 tensors).

Dataset used to train `gokaygokay/llava-llama3-docci`: `gokaygokay/random_instruct_docci`.