## About

GGUF quantizations of https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0

## Provided Quantizations
| Link | Type |
|---|---|
| GGUF | Q2_K |
| GGUF | Q3_K_S |
| GGUF | Q3_K_M |
| GGUF | Q3_K_L |
| GGUF | Q4_0 |
| GGUF | Q4_K_S |
| GGUF | Q4_K_M |
| GGUF | Q5_0 |
| GGUF | Q5_K_S |
| GGUF | Q5_K_M |
| GGUF | Q6_K |
| GGUF | Q8_0 |
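To help choose among the quant types above, here is a minimal sketch of how on-disk size scales with bits per weight. The bits-per-weight figures are rough approximations (the exact averages vary slightly by model and tensor mix), and the 1.1e9 parameter count comes from the base model's name.

```python
# Rough GGUF size estimator: file size ~ parameter count * bits-per-weight / 8.
# The bits-per-weight values below are approximate averages for llama.cpp
# quant types, not exact figures for this repository's files.
APPROX_BPW = {
    "Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_0": 4.55, "Q4_K_M": 4.85,
    "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5,
}

def estimated_size_gib(n_params: float, quant: str) -> float:
    """Estimate the on-disk size in GiB of a GGUF file for a quant type."""
    return n_params * APPROX_BPW[quant] / 8 / 2**30

# For a 1.1B-parameter model such as TinyLlama:
for q in ("Q2_K", "Q4_K_M", "Q8_0"):
    print(f"{q}: ~{estimated_size_gib(1.1e9, q):.2f} GiB")
```

In practice the lower-bit quants (Q2_K, Q3_K_*) trade noticeable quality for size, while Q4_K_M and above are usually close to the full-precision model; Q8_0 is nearly lossless.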
In a circular citation, I borrowed the format of this file from https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF.