MakiPan committed on
Commit bd81c24 · 1 Parent(s): 9e7c3db

Update app.py

Files changed (1): app.py (+17 -17)
app.py CHANGED
@@ -216,8 +216,19 @@ def infer(prompt, negative_prompt, image, model_type="Standard"):
 with gr.Blocks(theme='gradio/soft') as demo:
     gr.Markdown("## Stable Diffusion with Hand Control")
     gr.Markdown("This model is a ControlNet model using MediaPipe hand landmarks for control.")
+    gr.Markdown("""
+ <center><h2><b>LINKS 🔗</b></h2>
+ <h4 style="text-align: center;"><a href="https://huggingface.co/Vincent-luo/controlnet-hands">Standard Model Link</a></h4>
+ <h4 style="text-align: center;"> <a href="https://huggingface.co/MakiPan/controlnet-encoded-hands-130k/">Model using Hand Encoding</a></h4>
+ <br>
+ <h4 style="text-align: center;"> <a href="https://huggingface.co/datasets/MakiPan/hagrid250k-blip2">Dataset Used To Train the Standard Model</a></h4>
+ <h4 style="text-align: center;"> <a href="https://huggingface.co/datasets/MakiPan/hagrid-hand-enc-250k">Dataset Used To Train the Hand Encoding Model</a></h4>
+ <br>
+ <h4 style="text-align: center;"> <a href="https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/normal-preprocessing.py">Standard Data Preprocessing Script</a></h4>
+ <h4 style="text-align: center;"> <a href="https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/Hand-encoded-preprocessing.py">Hand Encoding Data Preprocessing Script</a></h4>
+ """)
     model_type = gr.Radio(["Standard", "Hand Encoding"], value="Standard", label="Model preprocessing", info="We developed two models, one with standard MediaPipe landmarks, and one with different (but similar) coloring on palm landmarks to distinguish left and right")
-
+
     with gr.Row():
         with gr.Column():
             prompt_input = gr.Textbox(label="Prompt")
@@ -265,8 +276,8 @@ with gr.Blocks(theme='gradio/soft') as demo:
         cache_examples=True,
     )
 
-    inputs = [prompt_input, negative_prompt, input_image, model_type]
-    submit_btn.click(fn=infer, inputs=inputs, outputs=[output_image])
+    inputs = [prompt_input, negative_prompt, input_image, model_type]
+    submit_btn.click(fn=infer, inputs=inputs, outputs=[output_image])
 
     gr.Markdown("""
 <center><h1>Summary</h1></center>
@@ -277,7 +288,7 @@ We opted to use the [HAnd Gesture Recognition Image Dataset](https://github.com/
 <br>
 To preprocess the data there were three options we considered:
 <br>
- -The first was to use Mediapipes built-in draw landmarks function. This was an obvious first choice however we noticed with low training steps that the model couldn't easily distinguish handedness and would often generate the wrong hand for the conditioning image.<center>
+ > * The first was to use Mediapipes built-in draw landmarks function. This was an obvious first choice however we noticed with low training steps that the model couldn't easily distinguish handedness and would often generate the wrong hand for the conditioning image.<center>
 <br>
 <table><tr>
 <td>
@@ -297,7 +308,7 @@ To preprocess the data there were three options we considered:
 </tr></table>
 </center>
 <br>
- -To counter this issue we changed the palm landmark colors with the intention to keep the color similar in order to learn that they provide similar information, but different to make the model know which hands were left or right.<center>
+ > * To counter this issue we changed the palm landmark colors with the intention to keep the color similar in order to learn that they provide similar information, but different to make the model know which hands were left or right.<center>
 <br>
 <table><tr>
 <td>
@@ -317,20 +328,9 @@ To preprocess the data there were three options we considered:
 </tr></table>
 </center>
 <br>
- -The last option was to use [MediaPipe Holistic](https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html) to provide pose face and hand landmarks to the ControlNet. This method was promising in theory, however, the HaGRID dataset was not suitable for this method as the Holistic model performs poorly with partial body and obscurely cropped images.
+ > * The last option was to use [MediaPipe Holistic](https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html) to provide pose face and hand landmarks to the ControlNet. This method was promising in theory, however, the HaGRID dataset was not suitable for this method as the Holistic model performs poorly with partial body and obscurely cropped images.
 <br>
 We anecdotally determined that when trained at lower steps the encoded hand model performed better than the standard MediaPipe model due to implied handedness. We theorize that with a larger dataset of more full-body hand and pose classifications, Holistic landmarks will provide the best images in the future however for the moment the hand encoded model performs best. """)
 
-    gr.Markdown("""
- <center><h2><b>LINKS 🔗</b></h2>
- <h3 style="text-align: center;"><a href="https://huggingface.co/Vincent-luo/controlnet-hands">Standard Model Link</a></h3>
- <h3 style="text-align: center;"> <a href="https://huggingface.co/MakiPan/controlnet-encoded-hands-130k/">Model using Hand Encoding</a></h3>
- <br>
- <h3 style="text-align: center;"> <a href="https://huggingface.co/datasets/MakiPan/hagrid250k-blip2">Dataset Used To Train the Standard Model</a></h3>
- <h3 style="text-align: center;"> <a href="https://huggingface.co/datasets/MakiPan/hagrid-hand-enc-250k">Dataset Used To Train the Hand Encoding Model</a></h3>
- <br>
- <h3 style="text-align: center;"> <a href="https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/normal-preprocessing.py">Standard Data Preprocessing Script</a></h3>
- <h3 style="text-align: center;"> <a href="https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/Hand-encoded-preprocessing.py">Hand Encoding Data Preprocessing Script</a></h3></center>
- """)
 
 demo.launch()
 
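For readers skimming the diff, the re-indented hunk with `inputs` and `submit_btn.click` is the entire inference wiring: the listed components are passed to `infer` in order, and its return value fills the output image. Below is a minimal, self-contained sketch of that Gradio pattern; the stub `infer` and the extra component names are placeholders, not the app's actual implementation.

```python
import gradio as gr

# Placeholder for the app's infer(); the real function runs the ControlNet pipeline.
def infer(prompt, negative_prompt, image, model_type="Standard"):
    return image  # echo the conditioning image back, purely for illustration

with gr.Blocks(theme="gradio/soft") as demo:
    model_type = gr.Radio(["Standard", "Hand Encoding"], value="Standard", label="Model preprocessing")
    with gr.Row():
        with gr.Column():
            prompt_input = gr.Textbox(label="Prompt")
            negative_prompt = gr.Textbox(label="Negative Prompt")
            input_image = gr.Image(label="Conditioning Image")
            submit_btn = gr.Button("Submit")
        output_image = gr.Image(label="Result")

    # click() passes the listed components' values to fn in order and
    # writes the returned value(s) into the output components.
    inputs = [prompt_input, negative_prompt, input_image, model_type]
    submit_btn.click(fn=infer, inputs=inputs, outputs=[output_image])

demo.launch()
```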
 
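The first preprocessing option described in the Summary text uses MediaPipe's stock hand-landmark drawing. A rough sketch of that approach on a single image, assuming a hypothetical file path; the linked normal-preprocessing.py script is the authoritative version.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

image = cv2.imread("hand.jpg")  # hypothetical input frame from the dataset

# Detect up to two hands on a static image.
with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# Draw the detected landmarks onto a black canvas to form the conditioning image.
condition = np.zeros_like(image)
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_drawing.draw_landmarks(condition, hand_landmarks, mp_hands.HAND_CONNECTIONS)

cv2.imwrite("hand_condition.png", condition)
```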
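The second option, the "hand encoding" behind the second model, keeps the standard drawing but recolors the palm landmarks so left and right hands are distinguishable. The palm indices and colors below are illustrative assumptions only; the exact encoding is defined in the linked Hand-encoded-preprocessing.py script.

```python
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

# Assumption: the wrist plus the finger MCP joints roughly outline the palm.
PALM_LANDMARKS = (0, 1, 5, 9, 13, 17)
# Assumed BGR colors: similar enough to still read as "palm",
# different enough to encode handedness.
PALM_COLORS = {"Left": (255, 96, 0), "Right": (0, 96, 255)}

def hand_landmark_style(handedness_label):
    """Per-landmark DrawingSpec mapping: palm points colored by handedness."""
    default = mp_drawing.DrawingSpec(color=(0, 255, 0), thickness=2, circle_radius=2)
    palm = mp_drawing.DrawingSpec(color=PALM_COLORS[handedness_label], thickness=2, circle_radius=2)
    return {i: (palm if i in PALM_LANDMARKS else default) for i in range(21)}

# Usage inside the detection loop from the previous sketch:
# for hand_landmarks, handedness in zip(results.multi_hand_landmarks, results.multi_handedness):
#     label = handedness.classification[0].label  # "Left" or "Right"
#     mp_drawing.draw_landmarks(condition, hand_landmarks, mp_hands.HAND_CONNECTIONS,
#                               landmark_drawing_spec=hand_landmark_style(label))
```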
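The third option, MediaPipe Holistic, was not shipped because HaGRID's tight crops degrade its output; for completeness, here is a sketch of what that conditioning image could look like, again with a hypothetical input path and default model settings.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils

image = cv2.imread("person.jpg")  # hypothetical full-body image

with mp_holistic.Holistic(static_image_mode=True) as holistic:
    results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# Render pose, face contours, and both hands onto one conditioning image.
# draw_landmarks() silently skips landmark sets that were not detected (None).
condition = np.zeros_like(image)
mp_drawing.draw_landmarks(condition, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)
mp_drawing.draw_landmarks(condition, results.face_landmarks, mp_holistic.FACEMESH_CONTOURS)
mp_drawing.draw_landmarks(condition, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
mp_drawing.draw_landmarks(condition, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS)

cv2.imwrite("holistic_condition.png", condition)
```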