roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-Q8_0-GGUF

- Repo: roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-Q8_0-GGUF
- Original Model: Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf
- Quantized File: M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-Q8_0.gguf
- Quantization: GGUF
- Quantization Method: Q8_0

Overview

This is a Q8_0-quantized GGUF version of Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf.
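
As a minimal usage sketch (not part of the original card), the quantized file can be fetched and run with llama-cpp-python. The repo and file names come from the metadata above; the context size and generation settings are placeholder assumptions to adjust for your hardware:

```python
# Minimal sketch: download and load the Q8_0 GGUF with llama-cpp-python.
# Repo and filename are taken from this card's metadata; n_ctx and
# n_gpu_layers are assumptions to tune for available memory.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-Q8_0-GGUF",
    filename="M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-Q8_0.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window (assumed; raise if RAM/VRAM allows)
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

out = llm("Q: What does Q8_0 quantization mean?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```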

Quantization By

I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Model Details

- Format: GGUF
- Model size: 24.2B params
- Architecture: llama
- Precision: 8-bit (Q8_0)
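
For a rough sense of the on-disk footprint, assuming llama.cpp's standard Q8_0 layout (blocks of 32 int8 weights plus one fp16 scale, about 8.5 bits per weight; an assumption, not stated on this card), the file size can be estimated:

```python
# Back-of-envelope Q8_0 size estimate.
# Assumes llama.cpp's Q8_0 block layout: 32 int8 weights + one fp16
# scale = 34 bytes per 32 weights (~1.0625 bytes per weight).
params = 24.2e9                      # parameter count from this card
bytes_per_weight = 34 / 32           # assumed Q8_0 overhead
size_gb = params * bytes_per_weight / 1e9
print(f"~{size_gb:.1f} GB")          # ~25.7 GB
```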
