[Docs] Pipelines for inference (#417)

* Update conditional_image_generation.mdx

* Update unconditional_image_generation.mdx
Satpal Singh Rathore 2022-09-08 16:12:13 +05:30 committed by GitHub
parent a353c46ec0
commit 6b9906f6c2
2 changed files with 54 additions and 16 deletions

docs/source/using-diffusers/conditional_image_generation.mdx

@@ -12,21 +12,39 @@ specific language governing permissions and limitations under the License.
# Conditional Image Generation

The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.
Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download.
You can use the [`DiffusionPipeline`] for any [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
In this guide though, you'll use [`DiffusionPipeline`] for text-to-image generation with [Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256):
```python
>>> from diffusers import DiffusionPipeline
>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
```

The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU.
You can move the generator object to GPU, just like you would in PyTorch.
```python
>>> generator.to("cuda")
```
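If you are not sure whether a GPU is available, a common PyTorch pattern is to pick the device at runtime. This is a sketch: `generator` stands for the pipeline instance created above, which is why the `.to()` call is left commented out.

```python
import torch

# Fall back to CPU when CUDA is not available (note that inference on CPU
# will be much slower for a model of this size).
device = "cuda" if torch.cuda.is_available() else "cpu"
# generator.to(device)
print(device)
```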
Now you can use the `generator` on your text prompt:

```python
>>> image = generator("An image of a squirrel in Picasso style").images[0]
```
The output is by default wrapped into a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).
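Because the output is a standard PIL image, all the usual Pillow operations apply to it. As a sketch, the placeholder image below stands in for a real pipeline output:

```python
from PIL import Image

# Placeholder standing in for the pipeline output (a PIL.Image.Image);
# real outputs support the same calls.
image = Image.new("RGB", (256, 256))
print(image.size)  # (256, 256)
print(image.mode)  # RGB

# e.g. downscale before displaying or uploading
thumbnail = image.resize((64, 64))
```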
You can save the image by simply calling:
```python
>>> image.save("image_of_squirrel_painting.png")
```
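Sampling is stochastic, so repeated calls produce different images. To make a run reproducible you can seed a `torch.Generator` and pass it to the pipeline; the pipeline call is commented out here because it needs the downloaded model, but the seeding behavior itself is shown below. (Assumption: the pipeline's `__call__` accepts a `generator` argument, as diffusers pipelines generally do.)

```python
import torch

# A seeded torch.Generator makes the sampling deterministic.
seed = torch.Generator().manual_seed(42)
# image = generator("An image of a squirrel in Picasso style",
#                   generator=seed).images[0]

# Two generators with the same seed draw identical noise:
a = torch.randn(4, generator=torch.Generator().manual_seed(42))
b = torch.randn(4, generator=torch.Generator().manual_seed(42))
print(torch.equal(a, b))  # True
```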

docs/source/using-diffusers/unconditional_image_generation.mdx

@@ -12,21 +12,41 @@ specific language governing permissions and limitations under the License.
# Unconditional Image Generation

The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.
Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download.
You can use the [`DiffusionPipeline`] for any [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
In this guide though, you'll use [`DiffusionPipeline`] for unconditional image generation with [DDPM](https://arxiv.org/abs/2006.11239):
```python
>>> from diffusers import DiffusionPipeline
>>> generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256")
```

The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
Because diffusion models can be quite large, we recommend running inference on a GPU.
You can move the generator object to GPU, just like you would in PyTorch.
You can move the generator object to GPU, just like you would in PyTorch.
```python
>>> generator.to("cuda")
```
Now you can use the `generator` to generate an image:

```python
>>> image = generator().images[0]
```
The output is by default wrapped into a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).
You can save the image by simply calling:
```python
>>> image.save("generated_image.png")
```
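To generate several samples at once, many unconditional pipelines accept a `batch_size` argument, e.g. `images = generator(batch_size=4).images` (an assumption based on the DDPM-style pipeline API). The loop below sketches saving such a batch; placeholder PIL images stand in for real pipeline outputs so the snippet runs without downloading the model:

```python
from pathlib import Path
from PIL import Image

# Placeholders standing in for `generator(batch_size=4).images`.
images = [Image.new("RGB", (256, 256)) for _ in range(4)]

out_dir = Path("generated")
out_dir.mkdir(exist_ok=True)
for i, image in enumerate(images):
    # One file per sample: generated/generated_image_0.png, _1.png, ...
    image.save(out_dir / f"generated_image_{i}.png")
```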