# iamraafay/deepseek-vl-1.3b-4bitill-qwen-1.5b

This model was converted to MLX format from [`mlx-community/deepseek-vl-1.3b-4bit`](https://huggingface.co/mlx-community/deepseek-vl-1.3b-4bit) using mlx-vlm version **0.1.13**. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model iamraafay/deepseek-vl-1.3b-4bitill-qwen-1.5b --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
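mlx-vlm can also be called from Python instead of the CLI. A minimal sketch following the upstream mlx-vlm README (exact call signatures may differ between mlx-vlm versions; the image path is a placeholder):

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the quantized MLX weights and the matching processor
model_path = "iamraafay/deepseek-vl-1.3b-4bitill-qwen-1.5b"
model, processor = load(model_path)
config = load_config(model_path)

# Prepare the prompt and the image(s) to describe
image = ["path/to/image.jpg"]  # placeholder path
prompt = "Describe this image."

# Wrap the prompt in the model's chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

# Run generation and print the model's description
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```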
Model size: 310M parameters, FP16 (Safetensors).