Update README.md

6. Get the following image.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630a81452dcbcae099eadb84/nVcjBU5AuttTpoRgJl5oQ.png)
## How to make the LoRA Adapter

I used sd-scripts. The parameters are as follows:

```bash
accelerate launch --num_cpu_threads_per_process 1 flux_train_network.py --pretrained_model_name_or_path '/mnt/NVM/flux/flux1-dev.safetensors' --clip_l '/mnt/NVM/flux/clip_l.safetensors' --t5xxl '/mnt/NVM/flux/t5xxl_fp16.safetensors' --ae '/mnt/NVM/flux/ae.safetensors' --cache_latents --save_model_as safetensors --sdpa --persistent_data_loader_workers --max_data_loader_n_workers 2 --seed 42 --gradient_checkpointing --save_precision bf16 --network_module networks.lora_flux --network_dim 16 --network_alpha 16 --optimizer_type adamw8bit --learning_rate 1e-3 --network_train_unet_only --cache_text_encoder_outputs --max_train_epochs 3 --save_every_n_epochs 1 --dataset_config flux_lora.toml --output_dir /mnt/NVM/flux --output_name flux_lora --timestep_sampling sigmoid --model_prediction_type raw --discrete_flow_shift 3.0 --guidance_scale 1.0 --loss_type l2 --mixed_precision bf16 --full_bf16 --max_bucket_reso 2048 --min_bucket_reso 512 --apply_t5_attn_mask --lr_scheduler cosine --lr_warmup_steps 10
```

The dataset configuration passed via `--dataset_config` (`flux_lora.toml`) is:

```toml
[general]
enable_bucket = true

[[datasets]]
resolution = 1024
batch_size = 4

[[datasets.subsets]]
image_dir = '/mnt/NVM/flux_lora'
metadata_file = 'flux_lora.json'
```
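
The `metadata_file` entry points to a caption metadata JSON that is not included here. As a rough sketch only (not from this repo), the snippet below builds a kohya-ss sd-scripts-style metadata file from per-image `.txt` caption files; the exact schema expected by `flux_train_network.py` should be checked against the sd-scripts documentation, and the paths and file layout are assumptions taken from the config above.

```python
# Hypothetical helper, not from this repo: build a sd-scripts-style metadata JSON
# of the form {"<image key>": {"caption": "..."}} from per-image .txt caption files.
# Assumes /mnt/NVM/flux_lora holds the training images plus matching .txt captions.
import json
from pathlib import Path

image_dir = Path("/mnt/NVM/flux_lora")        # image_dir from flux_lora.toml
metadata = {}

for caption_path in sorted(image_dir.glob("*.txt")):
    key = str(image_dir / caption_path.stem)  # image path without extension
    metadata[key] = {"caption": caption_path.read_text(encoding="utf-8").strip()}

out_path = image_dir / "flux_lora.json"       # metadata_file from flux_lora.toml
out_path.write_text(json.dumps(metadata, ensure_ascii=False, indent=2), encoding="utf-8")
print(f"wrote {len(metadata)} entries to {out_path}")
```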