---
license: apache-2.0
---
Diffusers-format weights for the mochi-1-preview model.
They were created with the conversion script: https://github.com/huggingface/diffusers/blob/main/scripts/convert_mochi_to_diffusers.py
The model can be loaded directly with `from_pretrained` using the mochi branch of diffusers: https://github.com/huggingface/diffusers/tree/mochi-t2v
Alternatively, you can download the zipped weights directly from: https://huggingface.co/feizhengcong/mochi-1-preview-diffusers/blob/main/diffusers-mochi.zip
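If you prefer to fetch the archive programmatically, here is a minimal sketch using `huggingface_hub` (the repo id and filename come from the link above; the extraction directory is an arbitrary choice):

```python
import zipfile
from huggingface_hub import hf_hub_download

# Download the zipped diffusers weights from this repository.
zip_path = hf_hub_download(
    repo_id="feizhengcong/mochi-1-preview-diffusers",
    filename="diffusers-mochi.zip",
)

# Unpack into a local directory; this path can be passed to from_pretrained below.
with zipfile.ZipFile(zip_path) as f:
    f.extractall("diffusers-mochi")
```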
```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Use this repository id, or the local directory extracted above.
model_path = "feizhengcong/mochi-1-preview-diffusers"

pipe = MochiPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
frames = pipe(
    prompt,
    num_inference_steps=50,
    guidance_scale=4.5,
    num_frames=61,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]
export_to_video(frames, "mochi.mp4")
```
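The full pipeline is heavy even in bfloat16. If you hit out-of-memory errors, the standard diffusers offloading and VAE tiling helpers apply here as well (a sketch, not benchmarked on this checkpoint):

```python
# Replace pipe.to("cuda") above with CPU offloading to lower peak VRAM,
# and tile the VAE decode so large frames fit in memory.
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()
```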
Some generated results:
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62e34a12c9bece303d146af8/Cm3I6kidy2YP5nu3un7XP.mp4"></video>
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62e34a12c9bece303d146af8/bVNem7sGTvBEjxQG7MHw_.mp4"></video>
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62e34a12c9bece303d146af8/0YWv2KJwH_UB2WkWjO_bP.mp4"></video>
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62e34a12c9bece303d146af8/TOLKdMx-kFLCNXD6nIVHm.mp4"></video>
Many thanks for the discussion in https://github.com/huggingface/diffusers/pull/9769
Update (11.04): the VAE encoder has been released.
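With the encoder available, video frames can be mapped to latents. A minimal sketch, assuming the standard diffusers layout (the `vae` subfolder name and the dummy tensor shape are assumptions, not confirmed by this repo):

```python
import torch
from diffusers import AutoencoderKLMochi

# Load only the VAE from this repo; "vae" is the conventional diffusers subfolder.
vae = AutoencoderKLMochi.from_pretrained(
    "feizhengcong/mochi-1-preview-diffusers",
    subfolder="vae",
    torch_dtype=torch.float32,
).to("cuda")

# Dummy clip: (batch, channels, frames, height, width), values in [-1, 1].
video = torch.randn(1, 3, 7, 480, 848, device="cuda", dtype=torch.float32)
with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()
print(latents.shape)
```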