Note:

This repo hosts only a Q5_K_S iMatrix quant of Poppy Porpoise 0.72 L3 8B. The GGUF quant comes from Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix. The additional files in this repo are for personal use with Text Generation WebUI's llamacpp_hf loader.
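
Outside the webui, the same quant can also be loaded directly with llama-cpp-python. A minimal sketch, assuming the GGUF has been downloaded locally; the file name and generation settings below are illustrative, not part of this repo:

```python
# Minimal sketch: loading the Q5_K_S GGUF with llama-cpp-python.
# The model_path is hypothetical -- point it at the downloaded quant file.
from llama_cpp import Llama

llm = Llama(
    model_path="Poppy_Porpoise-0.72-L3-8B-Q5_K_S.gguf",  # hypothetical local file name
    n_ctx=8192,        # Llama 3 8B context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```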

GGUF details:
- Model size: 8.03B params
- Architecture: llama
- Quantization: 5-bit (Q5_K_S)
