Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf

This model was converted to GGUF format from GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct using llama.cpp. Refer to the original model card for more details on the model.

Use with llama.cpp
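
If llama.cpp is not installed yet, one option is Homebrew (works on macOS and Linux):

brew install llama.cpp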

CLI:

llama-cli --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -p "Your prompt here"
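
For an interactive chat session that applies the model's embedded Gemma 2 chat template, conversation mode can be used instead of a one-shot prompt. This is a sketch assuming a reasonably recent llama.cpp build (which accepts the -cnv flag; newer builds enable conversation mode automatically when a chat template is present):

llama-cli --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -cnv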

Server:

llama-server --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -c 2048
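
Once running, llama-server exposes an OpenAI-compatible HTTP API, by default on 127.0.0.1:8080. A minimal request sketch, assuming those defaults:

curl http://127.0.0.1:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Your prompt here"}]}'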

Model Details

Format: GGUF
Quantization: 8-bit (Q8_0)
Model size: 9.24B params
Architecture: gemma2
