This is the SD Turbo model converted to the OpenVINO format for fast inference on CPU. The model is intended for research purposes only.

Original model: sd-turbo

You can use this model with FastSD CPU.

Sample

To run the model yourself, you can leverage the 🧨 Diffusers library:

  1. Install the dependencies:
pip install optimum-intel openvino diffusers onnx
  2. Run the model:
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

# Load the OpenVINO-converted pipeline; the model is compiled on first use.
pipeline = OVStableDiffusionPipeline.from_pretrained(
    "rupeshs/sd-turbo-openvino",
    ov_config={"CACHE_DIR": ""},
)
prompt = "a cat wearing santa claus dress,portrait"

images = pipeline(
    prompt=prompt,
    width=512,
    height=512,
    num_inference_steps=1,  # SD Turbo is distilled for single-step generation
    guidance_scale=1.0,  # 1.0 effectively disables classifier-free guidance
).images
images[0].save("out_image.png")
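Because single-step generation is fast, it is common to run the same pipeline over several prompts in a loop. A minimal sketch of deriving a unique output filename per prompt; the `slugify` helper is hypothetical (not part of this model card), and the pipeline call is shown commented out since it requires the downloaded model:

```python
import re

def slugify(prompt: str) -> str:
    """Turn a prompt into a filesystem-safe filename stem."""
    return re.sub(r"[^a-z0-9]+", "-", prompt.lower()).strip("-")

prompts = [
    "a cat wearing santa claus dress,portrait",
    "a dog in a snowy forest",
]
for prompt in prompts:
    out_name = f"{slugify(prompt)}.png"
    # Reuse the already-compiled pipeline from the sample above:
    # images = pipeline(prompt=prompt, width=512, height=512,
    #                   num_inference_steps=1, guidance_scale=1.0).images
    # images[0].save(out_name)
```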

License

The SD Turbo Model is licensed under the Stability AI Non-Commercial Research Community License, Copyright (c) Stability AI Ltd. All Rights Reserved.
