---
base_model: genmo/mochi-1-preview
library_name: diffusers
license: apache-2.0
widget: []
tags:
- text-to-video
- diffusers-training
- diffusers
- lora
- mochi-1-preview
- mochi-1-preview-diffusers
- template:sd-lora
---



# Mochi-1 Preview LoRA Finetune

<Gallery />

## Model description

This is a LoRA fine-tune of the Mochi-1 preview model [`genmo/mochi-1-preview`](https://huggingface.co/genmo/mochi-1-preview).

The model was trained using [CogVideoX Factory](https://github.com/a-r-r-o-w/cogvideox-factory), a repository of memory-optimized training scripts for the CogVideoX and Mochi families of models built on [TorchAO](https://github.com/pytorch/ao) and [DeepSpeed](https://github.com/microsoft/DeepSpeed). The scripts were adapted from the [CogVideoX Diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_lora.py).

## Download model

[Download the LoRA](https://huggingface.co/soumildatta/mochi-lora/tree/main) from the Files & Versions tab.
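
If you download the weights manually, `load_lora_weights` also accepts a local path. In the sketch below, the directory and the `weight_name` (the default filename written by Diffusers LoRA training scripts) are assumptions that may differ for this checkpoint.

```py
from diffusers import MochiPipeline

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
# Load from a local directory; both the path and the filename are assumed here.
pipe.load_lora_weights(
    "./mochi-lora",
    weight_name="pytorch_lora_weights.safetensors",
)
```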

## Usage

Requires the [🧨 Diffusers library](https://github.com/huggingface/diffusers) to be installed (`pip install -U diffusers`).

```py
import torch

from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Load the base pipeline and the LoRA weights from this repository.
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
pipe.load_lora_weights("soumildatta/mochi-lora")

# Offload submodules to CPU when idle to reduce peak VRAM usage.
pipe.enable_model_cpu_offload()

# Run inference in bfloat16 autocast; Mochi-1 natively generates 480x848 video.
with torch.autocast("cuda", torch.bfloat16):
    video = pipe(
        prompt="CHANGE_ME",  # replace with your prompt
        guidance_scale=6.0,
        num_inference_steps=64,
        height=480,
        width=848,
        max_sequence_length=256,
        output_type="np",
    ).frames[0]

export_to_video(video, "output.mp4", fps=30)
```
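
If memory is still tight with CPU offloading enabled, `pipe.enable_vae_tiling()` can further reduce peak usage during the decoding step, at some cost in speed.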

For more details, including weighting, merging, and fusing LoRAs, see the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in Diffusers.
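
As a quick illustration of those options, here is a minimal sketch of adapter weighting and fusing with this pipeline; the adapter name `mochi-lora` and the scale `0.75` are illustrative values, not settings shipped with this checkpoint.

```py
from diffusers import MochiPipeline

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
# Register the LoRA under an explicit adapter name.
pipe.load_lora_weights("soumildatta/mochi-lora", adapter_name="mochi-lora")

# Scale the LoRA's contribution relative to the base model
# ("mochi-lora" and 0.75 are illustrative values).
pipe.set_adapters(["mochi-lora"], adapter_weights=[0.75])

# Optionally bake the LoRA into the base weights for slightly faster inference...
pipe.fuse_lora(lora_scale=0.75)
# ...and undo the fusion later if needed.
pipe.unfuse_lora()
```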



## Intended uses & limitations

#### How to use

See the [Usage](#usage) section above for an end-to-end inference example.

#### Limitations and bias

As a LoRA adapter, this model inherits the limitations and biases of the base `genmo/mochi-1-preview` model. [TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]