---
dataset_info:
  features:
    - name: latents
      sequence:
        sequence:
          sequence: float32
    - name: label_latent
      dtype: int64
  splits:
    - name: train
      num_bytes: 21682470308
      num_examples: 1281167
    - name: validation
      num_bytes: 846200000
      num_examples: 50000
    - name: test
      num_bytes: 1692400000
      num_examples: 100000
  download_size: 24417155228
  dataset_size: 24221070308
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

Better latents: consider using https://huggingface.co/datasets/cloneofsimo/imagenet.int8 instead, which is already compressed (only 5 GB).

This dataset contains latent representations of the ImageNet-1k dataset, produced with the Stability AI VAE `stabilityai/sd-vae-ft-ema`.

Every latent (the `latents` field) has shape `(4, 32, 32)`.
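As a sanity check, that shape follows from the SD VAE's 8x spatial downsampling of the 256x256 inputs and its 4 latent channels; a quick back-of-the-envelope sketch:

```python
# The SD VAE downsamples spatially by a factor of 8 and emits 4 latent channels,
# so a 256x256 RGB image becomes a (4, 32, 32) latent.
image_size = 256
downsample = 8
latent_channels = 4

latent_hw = image_size // downsample                         # 32
floats_per_latent = latent_channels * latent_hw * latent_hw  # 4096
bytes_per_latent = floats_per_latent * 4                     # float32 -> 16384 bytes (~16 KiB)
```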

To recover the original images, you have to load the same VAE that was used to create the latents:

```python
from diffusers import AutoencoderKL

vae_model = "stabilityai/sd-vae-ft-ema"
vae = AutoencoderKL.from_pretrained(vae_model)
vae.eval()
```

The images were encoded with:

```python
import torch
from diffusers.image_processor import VaeImageProcessor

vaeprocess = VaeImageProcessor()  # assumption: the processor used for VAE input normalization

images = [DEFAULT_TRANSFORM(image.convert("RGB")) for image in examples["image"]]
images = torch.stack(images)
images = vaeprocess.preprocess(images)  # rescales pixel values to [-1, 1] by default
images = images.to(device="cuda", dtype=torch.float)
with torch.no_grad():
    latents = vae.encode(images).latent_dist.sample()
```

With `DEFAULT_TRANSFORM` defined as:

```python
from torchvision import transforms

DEFAULT_IMAGE_SIZE = 256

DEFAULT_TRANSFORM = transforms.Compose(
    [
        transforms.Resize((DEFAULT_IMAGE_SIZE, DEFAULT_IMAGE_SIZE)),
        transforms.ToTensor(),
    ]
)
```
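Note that the latents are stored as raw `latent_dist` samples, without the scaling factor that Stable Diffusion training code conventionally applies (0.18215 for the SD VAE family; check `vae.config.scaling_factor` to be sure). A minimal sketch of that convention:

```python
# Assumption: 0.18215 is vae.config.scaling_factor for stabilityai/sd-vae-ft-ema,
# the usual value for Stable Diffusion VAEs.
SCALING_FACTOR = 0.18215

def scale(latent):
    """Apply before feeding latents to a diffusion model (roughly unit variance)."""
    return latent * SCALING_FACTOR

def unscale(latent):
    """Apply before vae.decode(...) if the latents were scaled."""
    return latent / SCALING_FACTOR
```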

The latents can be decoded back into images with:

```python
import datasets
import torch

latent_dataset = datasets.load_dataset("Forbu14/imagenet-1k-latent")

# vae.decode expects a batch, so add a leading batch dimension: (1, 4, 32, 32)
latent = torch.tensor(latent_dataset["train"][0]["latents"]).unsqueeze(0)
with torch.no_grad():
    image = vae.decode(latent).sample  # pixel values roughly in [-1, 1]
```
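Assuming the preprocessing normalized pixels to [-1, 1] (the diffusers `VaeImageProcessor` default), the decoded tensor can be mapped back to a displayable image; a sketch:

```python
import torch

def to_uint8_image(decoded: torch.Tensor) -> torch.Tensor:
    """Map a decoded batch in roughly [-1, 1] to uint8 images in HWC layout."""
    x = (decoded / 2 + 0.5).clamp(0, 1)      # back to [0, 1]
    x = (x * 255).round().to(torch.uint8)    # to byte range
    return x.permute(0, 2, 3, 1)             # (B, C, H, W) -> (B, H, W, C)

# Example with a dummy decoded batch standing in for vae.decode(...).sample:
dummy = torch.zeros(1, 3, 256, 256)
img = to_uint8_image(dummy)
```

From here, `PIL.Image.fromarray(img[0].numpy())` gives a viewable image.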