update paint by example docs (#2598)

commit 7fe638c502
parent c812d97d5b
```diff
@@ -136,7 +136,7 @@ def prepare_mask_and_masked_image(image, mask):
 class PaintByExamplePipeline(DiffusionPipeline):
     r"""
-    Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
+    Pipeline for image-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
 
     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
 
@@ -144,10 +144,8 @@ class PaintByExamplePipeline(DiffusionPipeline):
     Args:
         vae ([`AutoencoderKL`]):
             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-        text_encoder ([`CLIPTextModel`]):
-            Frozen text-encoder. Stable Diffusion uses the text portion of
-            [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
-            the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+        image_encoder ([`PaintByExampleImageEncoder`]):
+            Encodes the example input image. The unet is conditioned on the example image instead of a text prompt.
         tokenizer (`CLIPTokenizer`):
             Tokenizer of class
             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
```
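The updated docstring describes a pipeline that is guided by an example image rather than a text prompt. A minimal usage sketch might look like the following; the checkpoint name `Fantasy-Studio/Paint-by-Example` and the CUDA device are assumptions, not part of this diff, and the heavy imports are kept inside the function so the sketch can be defined without `diffusers` installed.

```python
def run_paint_by_example(init_image, mask_image, example_image):
    """Sketch: inpaint `init_image` where `mask_image` is white,
    guided by `example_image` instead of a text prompt."""
    # Local imports: this is only a sketch and the dependencies
    # (torch, diffusers) may not be installed.
    import torch
    from diffusers import PaintByExamplePipeline

    # Checkpoint name is an assumption for illustration.
    pipe = PaintByExamplePipeline.from_pretrained(
        "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # The image_encoder (PaintByExampleImageEncoder) embeds
    # `example_image`; the unet is conditioned on that embedding.
    result = pipe(
        image=init_image,
        mask_image=mask_image,
        example_image=example_image,
    )
    return result.images[0]
```

All three inputs are PIL images of the same size; the returned value is the inpainted PIL image.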