---
tags:
- merge
---

# Miquella 120B GGUF

GGUF quantized weights for [miquella-120b](https://huggingface.co/alpindale/miquella-120b). Contains *all* quants.

I used importance matrices for the quantization, using random data generated from the Q8_0 quant of the model, for maximum quality.

Due to HF's file size limits, the larger files were split into multiple chunks. Instructions for reassembling them are below.

## Linux

The example uses Q3_K_L. Replace the names appropriately for your quant of choice.

```sh
cat miquella-120b.Q3_K_L.gguf_part_* > miquella-120b.Q3_K_L.gguf && rm miquella-120b.Q3_K_L.gguf_part_*
```

## Windows

The example uses Q3_K_L. Replace the names appropriately for your quant of choice.

```sh
COPY /B miquella-120b.Q3_K_L.gguf_part_aa + miquella-120b.Q3_K_L.gguf_part_ab miquella-120b.Q3_K_L.gguf
```

Then delete the two splits.
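Once reassembled, you can sanity-check the merged file by simply loading it. A minimal sketch using llama.cpp's `main` binary (the `./llama.cpp/main` path and the prompt are assumptions; adjust for your build and quant of choice):

```sh
# Quick smoke test: load the reassembled quant and generate a few tokens.
# Assumes llama.cpp is already built; the binary path here is hypothetical.
./llama.cpp/main \
  -m miquella-120b.Q3_K_L.gguf \
  -p "Hello," \
  -n 32
```

If the load fails, re-check that every part was included and concatenated in order (`_part_aa`, `_part_ab`, ...) before the splits were deleted.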