<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-Guided Image-Inpainting
The [`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion specifically trained for in-painting tasks.
<Tip warning={true}>
Note that this model is distributed separately from the regular Stable Diffusion model, so you have to accept its license even if you accepted the Stable Diffusion one in the past.
Please visit the [model card](https://huggingface.co/runwayml/stable-diffusion-inpainting), read the license carefully, and tick the checkbox if you agree. You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need an access token for the code to work. For more information on access tokens, please refer to [this section](https://huggingface.co/docs/hub/security-tokens) of the documentation.
</Tip>
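Once you have a token, you can authenticate from a terminal with `huggingface-cli login`, or programmatically. A minimal sketch of the programmatic route, assuming the `huggingface_hub` package (installed alongside `diffusers`) and that your token is stored in an `HF_TOKEN` environment variable of your choosing:

```python
import os

from huggingface_hub import login

# Read the access token from an environment variable (the variable name
# HF_TOKEN here is an arbitrary choice, not required by the library).
token = os.environ.get("HF_TOKEN")
if token:
    login(token=token)  # registers the token for subsequent Hub downloads
```

After logging in, `from_pretrained` calls against gated repositories will use the stored token automatically.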
```python
import requests
import torch
from io import BytesIO
from PIL import Image

from diffusers import StableDiffusionInpaintPipeline


def download_image(url):
    response = requests.get(url)
    return Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
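In the mask, white pixels mark the region to be repainted and black pixels are kept from the original image. If you don't have a ready-made mask like the one downloaded above, you can build one with Pillow. A minimal sketch — the rectangle coordinates are arbitrary placeholders, not taken from the example above:

```python
from PIL import Image, ImageDraw

# Start from an all-black (keep everything) single-channel mask.
mask = Image.new("L", (512, 512), 0)
draw = ImageDraw.Draw(mask)

# Paint the region to be inpainted white (example coordinates).
draw.rectangle([128, 128, 384, 384], fill=255)

# The pipeline accepts a PIL image; convert to RGB to match the init image.
mask_image = mask.convert("RGB")
```

You can then pass this `mask_image` to the pipeline exactly as in the example above.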
`image` | `mask_image` | `prompt` | **Output** |
:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
<img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="250"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="250"/> | ***Face of a yellow cat, high resolution, sitting on a park bench*** | <img src="https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/test.png" alt="drawing" width="250"/> |
You can also run this example in Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
<Tip warning={true}>
A previous experimental implementation of in-painting used a different, lower-quality process. To ensure backwards compatibility, loading a pretrained pipeline that doesn't contain the new model will still apply the old in-painting method.
</Tip>