---
base_model: black-forest-labs/FLUX.1-dev
---

*Note that all these models are derivatives of black-forest-labs/FLUX.1-dev and are therefore covered by the 
[FLUX.1 [dev] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).*

*Some models are derivatives of finetunes and are included with the permission of the finetuner.*

# Optimised Flux GGUF models

A collection of GGUF models using mixed quantization (different layers quantized to different precisions to optimise the trade-off between fidelity and memory).

They were created using the [convert.py script](https://github.com/chrisgoringe/mixed-gguf-converter).

They can be loaded in ComfyUI using the [ComfyUI GGUF Nodes](https://github.com/city96/ComfyUI-GGUF). Just put the GGUF files in your
`models/unet` directory.

## Naming convention (mx for 'mixed')

`[original_model_name]_mxN_N.gguf`

where `N_N` is the average number of bits per parameter.
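
For example, a file named `flux1-dev_mx9_2.gguf` would be FLUX.1-dev at roughly 9.2 bits per parameter on average. Below is a small sketch of reading that figure back out of a file name; the helper and the example file name are illustrative only, not part of the converter:

```python
import re

def bits_per_parameter(filename: str) -> float:
    """Parse the average bits per parameter from a mixed-GGUF file name.

    e.g. 'flux1-dev_mx9_2.gguf' -> 9.2
    """
    match = re.search(r"_mx(\d+)_(\d+)\.gguf$", filename)
    if match is None:
        raise ValueError(f"not a mixed-GGUF file name: {filename}")
    whole, frac = match.groups()
    return float(f"{whole}.{frac}")

print(bits_per_parameter("flux1-dev_mx9_2.gguf"))  # 9.2
```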

## Good choices to start with
- 9_2 is a good choice for 16 GB cards
- 6_9 just fits on a 12 GB card
- 5_9 is comfortable on 12 GB cards

## Speed?

On an A40 (with plenty of VRAM), with everything identical except the model, the time taken to generate an image (30 steps, deis sampler) was:

- 5_1 => 40.1s
- 5_9 => 55.4s
- 6_9 => 52.1s
- 7_4 => 49.7s
- 7_6 => 43.6s
- 8_4 => 46.8s
- 9_2 => 42.8s
- 9_6 => 48.2s
  
For comparison:
- bfloat16 (default) =>
- fp8_e4m3fn =>
- fp8_e5m2 =>



## How is this optimised?

The process for optimisation is as follows:

- 240 prompts taken from popular Flux images on civit.ai were run through the full Flux.1-dev model with randomised resolution and step count.
- For a randomly selected step in the inference, the hidden states before and after the layer stack were captured.
- For each layer in turn, and for each quantization:
  - A single layer was quantized
  - The initial hidden states were processed by the modified layer stack
  - The error (MSE) in the final hidden state was calculated
- This gives a 'cost' for each possible layer quantization - how far its output differs from the full model
- An optimised quantization is one that gives the desired reduction in size for the smallest total cost
  - A series of recipes for optimization has been created from the calculated costs
- The various 'in' blocks, the final layer blocks, and all normalization scale parameters are stored in float32
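
To make the above concrete, here is a minimal sketch of the cost measurement and of one possible way to turn the costs into a recipe under a size budget. Everything here (function names, the layer-call interface, the greedy selection heuristic) is an illustrative assumption; the actual implementation is the convert.py script linked above.

```python
import torch


@torch.no_grad()
def layer_cost(layers, idx, quantize_fn, hidden_in, reference_out):
    """MSE in the final hidden state when only layer `idx` is quantized.

    `layers` is the transformer layer stack, `hidden_in` a captured hidden
    state entering the stack, and `reference_out` the full-precision output.
    `quantize_fn` returns a quantized copy of a layer (illustrative).
    """
    original = layers[idx]
    layers[idx] = quantize_fn(original)        # swap in the quantized layer
    out = hidden_in
    for layer in layers:                       # run the whole stack
        out = layer(out)
    layers[idx] = original                     # restore full precision
    return torch.nn.functional.mse_loss(out, reference_out).item()


def choose_recipe(costs, sizes, budget):
    """Pick one quantization per layer so the total size fits `budget`.

    costs[i][q] and sizes[i][q] give the measured cost and byte size of
    layer i under quantization q.  Greedy heuristic: start from the most
    precise option everywhere, then repeatedly apply the single downgrade
    with the lowest extra cost per byte saved until the budget is met.
    """
    choice = [max(s, key=s.get) for s in sizes]   # largest option per layer
    total = sum(sizes[i][q] for i, q in enumerate(choice))
    while total > budget:
        best = None
        for i, current in enumerate(choice):
            for q, size in sizes[i].items():
                saved = sizes[i][current] - size
                if saved <= 0:
                    continue
                ratio = (costs[i][q] - costs[i][current]) / saved
                if best is None or ratio < best[0]:
                    best = (ratio, i, q)
        if best is None:
            break                                  # nothing left to shrink
        _, i, q = best
        total -= sizes[i][choice[i]] - sizes[i][q]
        choice[i] = q
    return choice
```

The shipped `N_N` variants correspond to recipes pre-computed from these costs for a range of target sizes.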

## Also note

- Tests using bitsandbytes quantizations showed they did not perform as well as equivalently sized GGUF quants
- Quantizing different parts of a single layer to different precisions gave significantly worse results
- Leaving biases in 16 bit made no relevant difference
- Costs were evaluated for the original Flux.1-dev model; they are assumed to be essentially the same for finetunes