Latxa 7b Instruct GGUF

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| latxa-7b-v1-instruct-q8_0.gguf | Q8_0 | 8 | 7 GB | 8.2 GB | Fits in an RTX 3060 12 GB |
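
As a usage sketch (not part of the original card), the file can be loaded with llama-cpp-python; any GGUF-compatible runtime such as llama.cpp should work the same way. The `n_gpu_layers=-1` setting offloads every layer to the GPU, which matches the 12 GB VRAM note in the table; the Basque prompt is an assumption based on Latxa being a Basque-language model.

```python
# Minimal sketch: run the Q8_0 file with llama-cpp-python (assumed runtime;
# any GGUF-compatible loader works). File name matches the table above.
from llama_cpp import Llama

llm = Llama(
    model_path="latxa-7b-v1-instruct-q8_0.gguf",
    n_ctx=2048,        # context length; adjust as needed
    n_gpu_layers=-1,   # offload all layers; fits an RTX 3060 12 GB per the table
)

out = llm("Kaixo! Zer moduz?", max_tokens=128)  # Latxa targets Basque
print(out["choices"][0]["text"])
```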
Format: GGUF
Model size: 6.74B params
Architecture: llama

Model tree for oldbridge/latxa-7b-instruct-q8

Base model: HiTZ/latxa-7b-v1, quantized to produce this model.
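
For completeness, a hedged sketch of fetching the quantized file from this repository with the huggingface_hub library; the repo id and file name are taken from this card, everything else is standard library usage.

```python
# Sketch: download the GGUF file from this repo via the Hugging Face Hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="oldbridge/latxa-7b-instruct-q8",
    filename="latxa-7b-v1-instruct-q8_0.gguf",
)
print(model_path)  # local cache path to pass to a GGUF runtime
```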