---
license: other
tags:
- pytorch
- diffusers
- face image enhancement
---

# DifFace: Blind Face Restoration with Diffused Error Contraction

**Paper**: [DifFace: Blind Face Restoration with Diffused Error Contraction](https://arxiv.org/abs/2212.06512)

**Authors**: Zongsheng Yue, Chen Change Loy

**Abstract**:

*While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which require laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key of our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transmit from this intermediate state to the HQ target by recursively applying a pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with L2 loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations.*
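
To make the procedure in the abstract concrete, here is a minimal, hypothetical sketch of the sampling scheme it describes. The names `restorer`, `diffusion.q_sample`, and `diffusion.p_sample` are illustrative assumptions, not the actual DifFace API; see the [official repository](https://github.com/zsyOAOA/DifFace) for the real implementation.

```python
# Hypothetical sketch of DifFace sampling as described in the abstract;
# `restorer`, `diffusion.q_sample`, and `diffusion.p_sample` are assumed
# interfaces, not the actual DifFace API.

def difface_restore(y_lq, restorer, diffusion, N):
    """Restore a low-quality face image y_lq.

    restorer:  backbone trained with a plain L2 loss on synthetic LQ/HQ pairs
    diffusion: diffusion model pre-trained on high-quality face images
    N:         intermediate timestep at which reverse diffusion starts
    """
    # 1. Rough restoration with the L2-trained backbone.
    x0_hat = restorer(y_lq)

    # 2. Transition distribution: diffuse the rough estimate to the
    #    intermediate state x_N instead of starting from pure noise;
    #    the added noise contracts the backbone's residual error.
    x = diffusion.q_sample(x0_hat, t=N)

    # 3. Recursively apply the pre-trained reverse diffusion process
    #    from timestep N down to 0 to reach the high-quality target.
    for t in reversed(range(N)):
        x = diffusion.p_sample(x, t)
    return x
```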

## Inference

```python
# !pip install diffusers
import cv2

from diffusers import DifFacePipeline

model_id = "OAOA/DifFace"

# load the model and scheduler
pipe = DifFacePipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

im_path = "path/to/low_quality_face.png"  # path to your input image
im_lr = cv2.imread(im_path)  # read the low-quality face image

im_sr = pipe(im_lr, num_inference_steps=250, started_steps=100, aligned=True)['images'][0]

im_sr.save("restored_difface.png")  # save the restored result
```
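
Judging from the abstract, `num_inference_steps` is presumably the length of the full diffusion trajectory, while `started_steps` corresponds to the intermediate timestep N at which reverse diffusion begins (as in the sketch above), so a smaller `started_steps` should lean more heavily on the restoration backbone's estimate; `aligned=True` presumably signals a pre-cropped, aligned face input like the cropped samples shown below.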

<!--For more detailed information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)-->

## Training

If you want to train your own model, please have a look at the [official training example](https://github.com/zsyOAOA/DifFace).
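
As noted in the abstract, the restoration backbone is trained with a plain L2 loss on synthetic data, with no adversarial or perceptual terms. A minimal sketch of that objective is given below; the names are illustrative assumptions, not the repository's actual code.

```python
# Hypothetical sketch of the backbone objective implied by the abstract:
# a single L2 (MSE) loss on synthetic LQ/HQ pairs, nothing else to tune.
import torch.nn.functional as F

def backbone_train_step(restorer, optimizer, y_lq, x_hq):
    optimizer.zero_grad()
    x0_hat = restorer(y_lq)          # rough restoration of the synthetic LQ input
    loss = F.mse_loss(x0_hat, x_hq)  # plain L2 objective
    loss.backward()
    optimizer.step()
    return loss.item()
```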

## Samples

[<img src="assets/Solvay_conference.png" width="805px"/>](https://imgsli.com/MTM5NTgw)
[<img src="assets/Hepburn.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTc5) [<img src="assets/oldimg_05.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTgy)

<img src="testdata/cropped_faces/0368.png" height="200px" width="200px"/><img src="assets/0368.png" height="200px" width="200px"/> <img src="testdata/cropped_faces/0885.png" height="200px" width="200px"/><img src="assets/0885.png" height="200px" width="200px"/>

<img src="testdata/cropped_faces/0729.png" height="200px" width="200px"/><img src="assets/0729.png" height="200px" width="200px"/> <img src="testdata/cropped_faces/0934.png" height="200px" width="200px"/><img src="assets/0934.png" height="200px" width="200px"/>