|
# Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models (CVPR 2024) |
|
|
|
This is the StorySalon dataset proposed in StoryGen. |
|
|
|
For the open-source PDF data, you can directly download the frames, corresponding masks, descriptions and original story narratives. |
|
For the data extracted from YouTube videos, we also provide the corresponding masks, descriptions, and original story narratives in this repository. However, you need to download the videos yourself according to `./Image_Inpainted/Video/metadata.json`, and then use the provided data processing pipeline to obtain the frames.
|
|
|
## Video Meta Data Preparation |
|
We provide the metadata of our StorySalon dataset in `./Image_Inpainted/Video/metadata.json`. For each video, it includes the id, name, URL, duration, and the keyframe list after filtering.
|
|
|
To download these videos, we recommend using [yt-dlp](https://github.com/yt-dlp/yt-dlp) (the maintained successor to youtube-dl) via:

```
yt-dlp --write-auto-sub -o 'file/%(title)s.%(ext)s' -f 135 [url]
```
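Downloading can also be scripted against the metadata file. The sketch below is a minimal, hypothetical example: it assumes `metadata.json` deserializes to a list of per-video records with a `url` field, which you should adjust to the actual layout.

```python
import json
import subprocess

def build_download_cmd(url):
    # Mirrors the yt-dlp invocation above; '-f 135' selects a 480p mp4 stream.
    return ["yt-dlp", "--write-auto-sub",
            "-o", "file/%(title)s.%(ext)s", "-f", "135", url]

def download_all(metadata_path):
    # Assumption: metadata.json is a list of per-video records,
    # each exposing a "url" field.
    with open(metadata_path) as f:
        entries = json.load(f)
    for entry in entries:
        subprocess.run(build_download_cmd(entry["url"]), check=True)
```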
|
|
|
The keyframes extracted with the data processing pipeline (step 1) can be filtered against the keyframe list provided in the metadata, avoiding manual selection.
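This filtering step amounts to an intersection with the metadata's keyframe list. The helper below is an illustrative sketch; the assumption that the list holds frame filenames matching the extracted frames' names may need adjusting to the actual naming scheme.

```python
def filter_keyframes(extracted_frames, keyframe_list):
    # Keep only the extracted frames whose names appear in the
    # keyframe list from metadata.json (assumed to be filenames).
    keep = set(keyframe_list)
    return [f for f in extracted_frames if f in keep]
```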
|
|
|
The corresponding masks, story-level descriptions, and visual descriptions can be extracted with the data processing pipeline below, or downloaded from [here](https://huggingface.co/datasets/haoningwu/StorySalon).
|
|
|
|
|
## Data Processing Pipeline |
|
The data processing pipeline includes several necessary steps: |
|
- Extract the keyframes and their corresponding subtitles; |
|
- Detect and remove duplicate frames; |
|
- Segment text, people, and headshots in the frames, and remove frames that contain only real people;

- Inpaint the text, headshots, and real hands in the frames according to the segmentation masks;

- (Optional) Use a captioning model, combined with the subtitles, to generate a description of each image.
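As a concrete illustration of the duplicate-removal step, consecutive keyframes can be compared with a perceptual average hash. The sketch below is not the pipeline's actual implementation: it operates on pre-downscaled grayscale grids (e.g. 8x8 arrays produced with PIL) rather than raw frames, and the distance threshold is an assumption to tune.

```python
def ahash(grid):
    # Average hash: 1 for each pixel above the mean intensity, else 0.
    pixels = [p for row in grid for p in row]
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming(h1, h2):
    # Number of differing hash bits.
    return sum(a != b for a, b in zip(h1, h2))

def dedup(grids, threshold=5):
    # Keep the indices of frames that differ enough from the last kept frame.
    kept, last = [], None
    for i, g in enumerate(grids):
        h = ahash(g)
        if last is None or hamming(h, last) > threshold:
            kept.append(i)
            last = h
    return kept
```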
|
|
|
For a more detailed introduction to the data processing pipeline, please refer to [StoryGen](https://github.com/haoningwu3639/StoryGen) and our paper. |
|
|
|
## Citation |
|
If you use this dataset for your research or project, please cite: |
|
|
|
```
@inproceedings{liu2024intelligent,
  title     = {Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models},
  author    = {Liu, Chang and Wu, Haoning and Zhong, Yujie and Zhang, Xiaoyun and Wang, Yanfeng and Xie, Weidi},
  booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
}
```
|
|
|
## Contact |
|
If you have any questions, please feel free to contact [email protected] or [email protected].
|
|