Model Card for SDXL Finetune GGUF files

This model card is aimed at those who wish to run SDXL models in GGUF quantized format.

Model Description

Extracting the UNet and CLIP text encoders from an SDXL checkpoint and converting the UNet to a GGUF quantized model takes several steps, so I decided to share the already-quantized GGUF models here. Out of the vast number of SDXL finetuned models, only a few are quantized so far, mostly ones I use personally. I will continue to add more as time allows.
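The extraction step mentioned above amounts to partitioning the checkpoint's state dict by key prefix before handing the UNet to a GGUF conversion tool. A minimal sketch, assuming the usual SDXL single-file checkpoint layout (the prefixes and the helper name are illustrative, not taken from any particular tool):

```python
# Sketch: split an SDXL single-file checkpoint into UNet / CLIP / VAE / other
# tensors by key prefix, so the UNet can be quantized to GGUF on its own.
# The prefixes below follow the common SDXL checkpoint layout; adjust if yours differs.

UNET_PREFIX = "model.diffusion_model."
CLIP_PREFIXES = ("conditioner.embedders.0.", "conditioner.embedders.1.")
VAE_PREFIX = "first_stage_model."

def split_sdxl_state_dict(state_dict):
    """Partition tensors into unet / clip / vae / other buckets."""
    parts = {"unet": {}, "clip": {}, "vae": {}, "other": {}}
    for key, tensor in state_dict.items():
        if key.startswith(UNET_PREFIX):
            # Strip the prefix so the UNet can be saved as a standalone model.
            parts["unet"][key[len(UNET_PREFIX):]] = tensor
        elif key.startswith(CLIP_PREFIXES):
            parts["clip"][key] = tensor
        elif key.startswith(VAE_PREFIX):
            parts["vae"][key] = tensor
        else:
            parts["other"][key] = tensor
    return parts
```

In practice you would load the checkpoint with safetensors, save the "unet" bucket to its own file, and then run that file through a GGUF conversion script before quantizing.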

Model Licenses

Each SDXL finetuned model has its own licensing terms, and the quantized version falls under the same license as the original, so please check the repo of the original finetuned model for licensing information. Most of the finetuned models can be found on CivitAI.

V_Prediction Custom Node

If you are using a v_prediction model, ComfyUI will not automatically detect the quantized model as such. I made a custom node that forces the model type to v_prediction in ComfyUI for this purpose. V_Prediction Node link: https://github.com/magekinnarus/ComfyUI-V-Prediction-Node

Model Card Contact

If you are the owner of a finetuned model featured here and do not wish for your model to be available in GGUF quantized format, please contact me at: https://www.reddit.com/user/OldFisherman8/

GGUF
Model size: 2.57B params
Architecture: sdxl
Quantizations: 4-bit, 5-bit
