|
# Genshin_Impact_KAEDEHARA_KAZUHA HunyuanVideo LoRA |
|
|
|
This repository contains the setup and scripts needed to generate videos with the HunyuanVideo model using a LoRA (Low-Rank Adaptation) adapter fine-tuned on the Genshin Impact character KAEDEHARA_KAZUHA. Below are the instructions to install dependencies, download the models, and run the demo.
|
|
|
--- |
|
|
|
## Installation |
|
|
|
### Step 1: Install System Dependencies |
|
Run the following command to install required system packages: |
|
```bash
sudo apt-get update && sudo apt-get install git-lfs ffmpeg cbm
```
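
As an optional sanity check (not part of the original instructions), you can confirm that the installed tools are on your `PATH`:

```bash
git lfs version
ffmpeg -version | head -n 1
```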
|
|
|
### Step 2: Clone the Repository |
|
Clone the repository and navigate to the project directory: |
|
```bash
git clone https://huggingface.co/svjack/Genshin_Impact_KAEDEHARA_KAZUHA_HunyuanVideo_lora
cd Genshin_Impact_KAEDEHARA_KAZUHA_HunyuanVideo_lora
```
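
If `git-lfs` was set up only after cloning, the large files in this repository (the LoRA `.safetensors` weights and sample videos) may exist locally only as small pointer files. In that case, pulling the LFS objects explicitly should fetch the real files:

```bash
git lfs install
git lfs pull
```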
|
|
|
### Step 3: Install Python Dependencies |
|
Create a conda environment and install the required Python packages:
|
```bash
conda create -n py310 python=3.10
conda activate py310
pip install ipykernel
python -m ipykernel install --user --name py310 --display-name "py310"

pip install -r requirements.txt
pip install ascii-magic matplotlib tensorboard huggingface_hub
pip install moviepy==1.0.3
pip install sageattention==1.0.6

pip install torch==2.5.0 torchvision
```
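
As a quick, optional check that the environment is usable for generation (these commands are not part of the original setup), verify that PyTorch can see a GPU and that the extra packages import cleanly:

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import sageattention, moviepy; print('imports OK')"
```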
|
|
|
--- |
|
|
|
## Download Models |
|
|
|
### Step 1: Download HunyuanVideo Model |
|
Download the HunyuanVideo model and place it in the `ckpts` directory: |
|
```bash
huggingface-cli download tencent/HunyuanVideo --local-dir ./ckpts
```
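
The demo commands below expect the DiT and VAE weights at specific paths under `ckpts/hunyuan-video-t2v-720p`. A quick, optional check that the download produced them (paths taken from the demo commands in this README):

```bash
ls -lh ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
ls -lh ckpts/hunyuan-video-t2v-720p/vae/pytorch_model.pt
```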
|
|
|
### Step 2: Download LLaVA Model |
|
Download the LLaVA model and convert it into the text encoder format expected by HunyuanVideo:
|
```bash
cd ckpts
huggingface-cli download xtuner/llava-llama-3-8b-v1_1-transformers --local-dir ./llava-llama-3-8b-v1_1-transformers
wget https://raw.githubusercontent.com/Tencent/HunyuanVideo/refs/heads/main/hyvideo/utils/preprocess_text_encoder_tokenizer_utils.py
python preprocess_text_encoder_tokenizer_utils.py --input_dir llava-llama-3-8b-v1_1-transformers --output_dir text_encoder
```
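
If the preprocessing succeeded, a `text_encoder` directory should now exist alongside the downloaded LLaVA folder (an optional check):

```bash
ls text_encoder
```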
|
|
|
### Step 3: Download CLIP Model |
|
Download the CLIP model used as the second text encoder:
|
```bash
huggingface-cli download openai/clip-vit-large-patch14 --local-dir ./text_encoder_2
```
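
Note that the two steps above are run from inside `ckpts`, while the demo commands below reference paths such as `ckpts/text_encoder` relative to the repository root, so return to the root before continuing:

```bash
cd ..
```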
|
|
|
--- |
|
|
|
## Demo |
|
|
|
### Generate Video 1: KAEDEHARA_KAZUHA |
|
Run the following command to generate a video of KAEDEHARA_KAZUHA: |
|
```bash
python hv_generate_video.py \
    --fp8 \
    --video_size 544 960 \
    --video_length 60 \
    --infer_steps 30 \
    --prompt "This is a digital anime-style illustration featuring KAEDEHARA KAZUHA, a character with long, flowing white hair with red streaks, and red eyes, leaning on a wooden table in a cozy, warmly-lit café. She wears a black and orange outfit with a red scarf. In the background, there are shelves with various items and soft lighting. On the table, there is a glass of orange juice. The atmosphere is calm and inviting." \
    --save_path . \
    --output_type both \
    --dit ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt \
    --attn_mode sdpa \
    --vae ckpts/hunyuan-video-t2v-720p/vae/pytorch_model.pt \
    --vae_chunk_size 32 \
    --vae_spatial_tile_sample_min_size 128 \
    --text_encoder1 ckpts/text_encoder \
    --text_encoder2 ckpts/text_encoder_2 \
    --seed 1234 \
    --lora_multiplier 1.0 \
    --lora_weight KAEDEHARA_KAZUHA_im_lora_dir/KAEDEHARA_KAZUHA_single_im_lora-000008.safetensors
```
|
|
|
|
|
<video controls autoplay src="https://huggingface.co/svjack/Genshin_Impact_KAEDEHARA_KAZUHA_HunyuanVideo_lora/resolve/main/20250203-152222_1234.mp4"></video> |
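
The generated MP4 is written to the directory given by `--save_path`; judging from the sample files in this repository, the filename follows a timestamp-and-seed pattern (e.g. `20250203-152222_1234.mp4`). As an optional way to inspect the most recent output using the `moviepy` package installed earlier (a minimal sketch, not part of the original workflow):

```bash
python -c "
import glob, os
from moviepy.editor import VideoFileClip  # moviepy==1.0.3 was installed above

# Pick the most recently modified MP4 in the current directory (the --save_path used above).
path = max(glob.glob('*.mp4'), key=os.path.getmtime)
clip = VideoFileClip(path)
print(f'{path}: {clip.duration:.2f}s @ {clip.fps} fps, {clip.size[0]}x{clip.size[1]}')
clip.close()
"
```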
|
|
|
|
|
### Generate Video 2: KAEDEHARA_KAZUHA Letter |
|
Run the following command to generate a video of KAEDEHARA_KAZUHA holding a letter:
|
```bash
python hv_generate_video.py \
    --fp8 \
    --video_size 544 960 \
    --video_length 60 \
    --infer_steps 30 \
    --prompt "In this digital anime-style artwork, KAEDEHARA KAZUHA, a young man with silver hair and red eyes, is holding a white envelope with Japanese text. He wears a white T-shirt and a red wristband. The background shows a warmly lit room with wooden furniture, a vase of autumn leaves, and a window casting soft light. The atmosphere is cozy and inviting." \
    --save_path . \
    --output_type both \
    --dit ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt \
    --attn_mode sdpa \
    --vae ckpts/hunyuan-video-t2v-720p/vae/pytorch_model.pt \
    --vae_chunk_size 32 \
    --vae_spatial_tile_sample_min_size 128 \
    --text_encoder1 ckpts/text_encoder \
    --text_encoder2 ckpts/text_encoder_2 \
    --seed 1234 \
    --lora_multiplier 1.0 \
    --lora_weight KAEDEHARA_KAZUHA_im_lora_dir/KAEDEHARA_KAZUHA_single_im_lora-000008.safetensors
```
|
|
|
|
|
<video controls autoplay src="https://huggingface.co/svjack/Genshin_Impact_KAEDEHARA_KAZUHA_HunyuanVideo_lora/resolve/main/20250203-153526_1234.mp4"></video> |
|
|
|
|
|
--- |
|
|
|
## Notes |
|
- Ensure you have sufficient GPU resources for video generation. |
|
- Adjust the `--video_size`, `--video_length`, and `--infer_steps` parameters as needed for different output qualities and lengths (see the example after this list).
|
- The `--prompt` parameter can be modified to generate videos with different scenes or actions. |
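
For example, a faster, lower-cost preview run might use a smaller resolution, fewer frames, and fewer inference steps. The values below are illustrative only (not tested or tuned settings), the prompt is a placeholder, and all model paths are copied from the demo commands above:

```bash
python hv_generate_video.py \
    --fp8 \
    --video_size 320 576 \
    --video_length 25 \
    --infer_steps 20 \
    --prompt "your prompt here" \
    --save_path . \
    --output_type both \
    --dit ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt \
    --attn_mode sdpa \
    --vae ckpts/hunyuan-video-t2v-720p/vae/pytorch_model.pt \
    --vae_chunk_size 32 \
    --vae_spatial_tile_sample_min_size 128 \
    --text_encoder1 ckpts/text_encoder \
    --text_encoder2 ckpts/text_encoder_2 \
    --seed 1234 \
    --lora_multiplier 1.0 \
    --lora_weight KAEDEHARA_KAZUHA_im_lora_dir/KAEDEHARA_KAZUHA_single_im_lora-000008.safetensors
```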
|
|
|
--- |