Latxa 7b GGUF

Provided files

| Name | Quant method | Bits | Size |
| ---- | ------------ | ---- | ---- |
| latxa-7b-v1.gguf | F32 | 32 | 26 GB |
| latxa-7b-v1-f16.gguf | F16 | 16 | 13 GB |
| latxa-7b-v1-q8_0.gguf | Q8_0 | 8 | 6.7 GB |
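As a sanity check on the table, the file sizes follow roughly from the parameter count times the bytes per weight. The sketch below assumes the 6.74B parameter count reported on this card and the nominal bit width of each format; real GGUF files differ slightly because quantized formats such as Q8_0 also store per-block scale factors.

```python
PARAMS = 6.74e9  # parameter count reported on this model card


def approx_size_gb(params: float, bits_per_weight: float) -> float:
    """Rough file size in gigabytes for a given weight width.

    Ignores per-block quantization metadata and GGUF headers,
    so this is a lower-bound estimate, not an exact size.
    """
    return params * bits_per_weight / 8 / 1e9


# Nominal widths: F32 = 32 bits, F16 = 16 bits, Q8_0 ~ 8 bits
for name, bits in [("f32", 32), ("f16", 16), ("q8_0", 8)]:
    print(f"{name}: ~{approx_size_gb(PARAMS, bits):.1f} GB")
```

The estimates (roughly 27, 13.5, and 6.7 GB) line up with the listed file sizes, which is a quick way to verify that a downloaded file is not truncated.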
Model size: 6.74B params
Architecture: llama



Model tree for xezpeleta/latxa-7b-v1-gguf

Base model: HiTZ/latxa-7b-v1