Update README.md
README.md CHANGED
@@ -17,13 +17,12 @@ sd-wikiart-v2 is a stable diffusion model that has been fine-tuned on the [wikia
 
 [](https://colab.research.google.com/drive/1i7HJlTzVPEirNedDV-TcR5Ok2_8QI6zC?usp=sharing)
 
-<img src=https://cdn.discordapp.com/attachments/930559077170421800/1017265913231327283/unknown.png width=40% height=40%>
 
 ## Model Description
 
 The model originally used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
 
-The current model has been fine-tuned with a learning rate of 1e-05 for 1 epoch on 81K text-image pairs from wikiart dataset. Only the attention layers of the model are fine-tuned
+The current model has been fine-tuned with a learning rate of 1e-05 for 1 epoch on 81K text-image pairs from the wikiart dataset. Only the attention layers of the model are fine-tuned. This is done to avoid catastrophic forgetting: the model can generate artistic images given specific prompts while still retaining most of its previous knowledge.
 
 ## Training Data
 TODO

@@ -57,7 +56,7 @@ pipe = StableDiffusionPipeline.from_pretrained(
 )
 pipe = pipe.to(device)
 
-prompt = "a painting of eiffel tower in the style of
+prompt = "a painting of eiffel tower in the style of surrealism"
 with torch.autocast("cuda"):
     image = pipe(prompt, guidance_scale=7.5).images[0]
 
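The sentence added in the first hunk states that only the attention layers were fine-tuned. The training script is not part of this commit, but a rough sketch of attention-only fine-tuning with diffusers could look like the following; selecting modules by the `attn1`/`attn2` names is an assumption about diffusers' UNet parameter naming, not code taken from this repository.

```python
# Illustrative sketch only, not the repository's training code: freeze every
# UNet parameter except those belonging to attention layers before fine-tuning.
import torch
from diffusers import UNet2DConditionModel

# Assumption: start from the same base model named in the README.
unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)

for name, param in unet.named_parameters():
    # In diffusers' UNet, self- and cross-attention modules are named
    # attn1/attn2 inside the transformer blocks; everything else stays frozen.
    param.requires_grad = ("attn1" in name) or ("attn2" in name)

trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
total = sum(p.numel() for p in unet.parameters())
print(f"trainable: {trainable:,} / {total:,} parameters")

# Only the unfrozen parameters go to the optimizer; the learning rate matches
# the value quoted in the README.
optimizer = torch.optim.AdamW(
    (p for p in unet.parameters() if p.requires_grad), lr=1e-5
)
```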
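The second hunk only shows part of the README's usage snippet (the `from_pretrained` call is cut off at the hunk boundary). For context, a minimal self-contained version might look like the sketch below; the repo id string and the float16 setting are assumptions, not taken from the diff.

```python
# Minimal sketch of the usage snippet the second hunk edits; the repo id below
# is a placeholder, since the diff does not show the full from_pretrained call.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(
    "your-namespace/sd-wikiart-v2",  # placeholder repo id (assumption)
    torch_dtype=torch.float16,       # assumption: half precision to save GPU memory
)
pipe = pipe.to(device)

prompt = "a painting of eiffel tower in the style of surrealism"
with torch.autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("eiffel_tower_surrealism.png")
```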