# Inference Examples

## Installing the dependencies

Before running the scripts, make sure to install the library's dependencies:

```bash
pip install diffusers transformers ftfy
```
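
Both examples load the official Stable Diffusion weights with `use_auth_token=True`, so make sure you have accepted the model license on the Hub and are logged in before running them:

```bash
huggingface-cli login
```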

## Image-to-Image text-guided generation with Stable Diffusion

The `image_to_image.py` script implements `StableDiffusionImg2ImgPipeline`. It lets you pass a text prompt and an initial image to condition the generation of new images. This example also showcases how you can write custom diffusion pipelines using `diffusers`!

### How to use it

```python
import torch
from torch import autocast
import requests
from PIL import Image
from io import BytesIO

from image_to_image import StableDiffusionImg2ImgPipeline, preprocess

# load the pipeline
device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True
).to(device)

# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
init_image = preprocess(init_image)

prompt = "A fantasy landscape, trending on artstation"

with autocast("cuda"):
    images = pipe(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5)["sample"]

images[0].save("fantasy_landscape.png")
```
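
The `strength` parameter controls how much noise is added to the initial image before denoising: values near 0 keep the input almost unchanged, while values near 1 let the prompt repaint it almost from scratch. For example, `strength=0.75` with the default 50 inference steps runs roughly the last 37 denoising steps on a noised version of the input. A quick way to build intuition is to sweep a few values, reusing `pipe`, `prompt`, and `init_image` from above:

```python
# Compare how much of the initial image survives at different strengths:
# low strength keeps the composition, high strength repaints more freely.
with autocast("cuda"):
    for strength in (0.4, 0.6, 0.8):
        image = pipe(prompt=prompt, init_image=init_image, strength=strength, guidance_scale=7.5)["sample"][0]
        image.save(f"fantasy_landscape_strength_{strength}.png")
```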

You can also run this example in Colab.
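
The img2img script also supports the K-LMS scheduler (added in #270). Below is a minimal sketch of swapping it in at load time, assuming your installed `diffusers` version exposes `LMSDiscreteScheduler` with the usual Stable Diffusion v1 beta schedule:

```python
from diffusers import LMSDiscreteScheduler

# K-LMS scheduler configured with the beta schedule used by Stable Diffusion v1
lms = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
)

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    scheduler=lms,
    use_auth_token=True,
).to(device)
```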

## Tweak prompts reusing seeds and latents

You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. This notebook shows how to do it step by step; you can also run it in Google Colab.
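
The idea is to draw the initial latents from a seeded generator yourself, so the same latents can be fed back in while only the prompt changes. A minimal sketch with the standard text-to-image pipeline, assuming your `diffusers` version's pipeline `__call__` accepts a `latents` argument:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True,
).to(device)

# seed a generator and sample the initial latents once, so they can be reused
generator = torch.Generator(device=device).manual_seed(1024)
latents = torch.randn(
    (1, pipe.unet.in_channels, 512 // 8, 512 // 8),
    generator=generator,
    device=device,
    dtype=torch.float16,
)

# the same latents with different prompts yield variations of one "scene"
with autocast("cuda"):
    for prompt in ("a photo of an astronaut riding a horse",
                   "an oil painting of an astronaut riding a horse"):
        image = pipe(prompt, latents=latents)["sample"][0]
        image.save(f"{prompt[:20]}.png")
```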

## In-painting using Stable Diffusion

The `inpainting.py` script implements `StableDiffusionInpaintingPipeline`. It lets you edit specific parts of an image by providing a mask and a text prompt.

### How to use it

```python
import torch
from io import BytesIO

from torch import autocast
import requests
import PIL

from inpainting import StableDiffusionInpaintingPipeline

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

# download the original image and the mask marking the region to repaint
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

# load the pipeline
device = "cuda"
pipe = StableDiffusionInpaintingPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True
).to(device)

prompt = "a cat sitting on a bench"
with autocast("cuda"):
    images = pipe(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75)["sample"]

images[0].save("cat_on_bench.png")
```
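
The mask follows the usual inpainting convention: white pixels mark the region to repaint, black pixels are kept from the original image. Instead of downloading one, you can also build a mask programmatically. A minimal sketch with PIL, reusing `pipe`, `prompt`, and `init_image` from above (the rectangle coordinates are arbitrary):

```python
from PIL import ImageDraw

# build a simple rectangular mask: white (255) = repaint, black (0) = keep
mask = PIL.Image.new("L", init_image.size, 0)
draw = ImageDraw.Draw(mask)
draw.rectangle((128, 128, 384, 384), fill=255)
mask = mask.convert("RGB")  # match the RGB masks produced by download_image above

with autocast("cuda"):
    images = pipe(prompt=prompt, init_image=init_image, mask_image=mask, strength=0.75)["sample"]
```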

You can also run this example in Colab.