ikawrakow/mixtral-8x7b-quantized-gguf
GGUF
License: apache-2.0
1 contributor · History: 3 commits
Latest commit: 558135c ("Update README.md" by ikawrakow, about 1 year ago)
File                          Size     LFS  Last commit message              Updated
.gitattributes                1.56 kB       Adding Mixtral quantized models  about 1 year ago
README.md                     1.53 kB       Update README.md                 about 1 year ago
mixtral-8x7b-q2k.gguf         15.4 GB  LFS  Adding Mixtral quantized models  about 1 year ago
mixtral-8x7b-q3k-medium.gguf  22.4 GB  LFS  Adding Mixtral quantized models  about 1 year ago
mixtral-8x7b-q3k-small.gguf   20.3 GB  LFS  Adding Mixtral quantized models  about 1 year ago
mixtral-8x7b-q4k-medium.gguf  28.4 GB  LFS  Adding Mixtral quantized models  about 1 year ago
mixtral-8x7b-q4k-small.gguf   26.7 GB  LFS  Adding Mixtral quantized models  about 1 year ago
mixtral-8x7b-q5k-small.gguf   32.2 GB  LFS  Adding Mixtral quantized models  about 1 year ago
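As a rough sanity check on the listed file sizes, the sketch below converts each size into an approximate average bits-per-weight figure. It assumes Mixtral 8x7B has roughly 46.7e9 total parameters (a commonly cited figure, not stated in this listing) and that the sizes above are decimal gigabytes; the numbers it prints are estimates, not exact quantization specs.

```python
# Assumption: ~46.7B total (not active) parameters for Mixtral 8x7B.
TOTAL_PARAMS = 46.7e9

# File sizes taken from the listing above, in decimal GB.
file_sizes_gb = {
    "mixtral-8x7b-q2k.gguf": 15.4,
    "mixtral-8x7b-q3k-small.gguf": 20.3,
    "mixtral-8x7b-q3k-medium.gguf": 22.4,
    "mixtral-8x7b-q4k-small.gguf": 26.7,
    "mixtral-8x7b-q4k-medium.gguf": 28.4,
    "mixtral-8x7b-q5k-small.gguf": 32.2,
}

def bits_per_weight(size_gb: float, n_params: float = TOTAL_PARAMS) -> float:
    """Convert a file size in decimal GB to average bits per parameter."""
    return size_gb * 1e9 * 8 / n_params

for name, gb in file_sizes_gb.items():
    print(f"{name}: ~{bits_per_weight(gb):.2f} bits/weight")
```

For example, the q2k file works out to roughly 2.6 bits/weight and the q5k-small file to roughly 5.5, in line with what the k-quant naming suggests.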