<p align="center">
    <br>
    <img src="docs/source/imgs/diffusers_library.jpg" width="400"/>
    <br>
</p>
<p align="center">
    <a href="https://github.com/huggingface/diffusers/blob/main/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/diffusers.svg?color=blue">
    </a>
    <a href="https://github.com/huggingface/diffusers/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg">
    </a>
    <a href="CODE_OF_CONDUCT.md">
        <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg">
    </a>
</p>
🤗 Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves
as a modular toolbox for inference and training of diffusion models.
More precisely, 🤗 Diffusers offers:
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)).
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
- Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
- Training examples that show how to train the most popular diffusion models (see [examples](https://github.com/huggingface/diffusers/tree/main/examples)).

## Definitions

**Models**: A neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input into an image.
*Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet

<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/>
    <br>
    <em> Figure from the DDPM paper (https://arxiv.org/abs/2006.11239). </em>
</p>

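To make the definition concrete, the reverse step $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ can be sketched in a few lines of NumPy. This is only an illustrative sketch of the standard DDPM update, not library code: `ddpm_reverse_step` is a made-up name, and `eps_pred` stands in for the noise the trained model would predict.

```python
import numpy as np

def ddpm_reverse_step(x_t, eps_pred, t, betas, rng):
    # One reverse step x_t -> x_{t-1}: subtract the (scaled) predicted
    # noise, rescale by 1/sqrt(alpha_t), then add fresh noise for t > 0.
    alphas = 1.0 - betas
    alphas_cumprod = np.cumprod(alphas)
    coef = betas[t] / np.sqrt(1.0 - alphas_cumprod[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # linear schedule from the DDPM paper
x_t = rng.standard_normal((1, 3, 8, 8))       # stand-in for a noisy image
eps_pred = rng.standard_normal((1, 3, 8, 8))  # stand-in for the model output
x_prev = ddpm_reverse_step(x_t, eps_pred, t=999, betas=betas, rng=rng)
```

In the library this logic lives inside the scheduler classes; the model's only job is producing `eps_pred`.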
**Schedulers**: Algorithm classes used for both **inference** and **training**.
They provide functionality to compute the previous image according to an alpha/beta schedule, as well as to produce the noise targets for training.
*Examples*: [DDPM](https://arxiv.org/abs/2006.11239), [DDIM](https://arxiv.org/abs/2010.02502), [PNDM](https://arxiv.org/abs/2202.09778), [DEIS](https://arxiv.org/abs/2204.13902)

<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174349706-53d58acc-a4d1-4cda-b3e8-432d9dc7ad38.png" width="800"/>
    <br>
    <em> Sampling and training algorithms. Figure from the DDPM paper (https://arxiv.org/abs/2006.11239). </em>
</p>

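The training side of the algorithms shown above can be sketched the same way. The closed-form noising below is standard DDPM math (Algorithm 1); `make_training_example` is an illustrative helper name, not part of the library, and the "model" here is a dummy zero predictor.

```python
import numpy as np

def make_training_example(x0, t, betas, rng):
    # Noise the clean image in closed form:
    #   x_t = sqrt(acp_t) * x_0 + sqrt(1 - acp_t) * eps
    # where acp_t is the cumulative product of (1 - beta) up to step t.
    alphas_cumprod = np.cumprod(1.0 - betas)
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * eps
    return x_t, eps  # the model is trained so that model(x_t, t) ~ eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)
x0 = rng.standard_normal((1, 3, 8, 8))       # stand-in for a clean image
x_t, target = make_training_example(x0, t=500, betas=betas, rng=rng)
# MSE against a dummy predictor that always outputs zeros:
loss = np.mean((target - np.zeros_like(target)) ** 2)
```

Training then amounts to sampling `t`, building `(x_t, target)` pairs like this, and minimizing the MSE between the model's prediction and `target`.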
**Diffusion Pipeline**: End-to-end pipeline that combines multiple diffusion models, possibly text encoders, and more.
*Examples*: Glide, Latent Diffusion, Imagen, DALL-E 2

<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174348898-481bd7c2-5457-4830-89bc-f0907756f64c.jpeg" width="550"/>
    <br>
    <em> Figure from Imagen (https://imagen.research.google/). </em>
</p>

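Structurally, the pipeline abstraction above is little more than a loop wiring a model and a scheduler together. The toy sketch below only illustrates the shape of that loop; none of these classes or the placeholder update rule are the actual diffusers API.

```python
class ToyScheduler:
    """Toy scheduler: holds timesteps and a placeholder update rule."""

    def __init__(self, num_steps):
        self.timesteps = list(range(num_steps))[::-1]  # e.g. 9, 8, ..., 0

    def step(self, residual, t, sample):
        # Placeholder update, standing in for a real DDPM/DDIM step.
        return sample - residual / (t + 1)

class ToyPipeline:
    """Toy pipeline: repeatedly asks the model for a residual and lets
    the scheduler compute the next sample."""

    def __init__(self, model, scheduler):
        self.model = model
        self.scheduler = scheduler

    def __call__(self, sample):
        for t in self.scheduler.timesteps:
            residual = self.model(sample, t)
            sample = self.scheduler.step(residual, t, sample)
        return sample

# A lambda stands in for the neural network.
pipeline = ToyPipeline(model=lambda x, t: 0.1 * x, scheduler=ToyScheduler(10))
out = pipeline(1.0)
```

Real pipelines add components such as text encoders and tokenizers around this loop, but the model/scheduler split is the core design.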
## Philosophy

- Readability and clarity are preferred over highly optimized code. Strong emphasis is placed on readable, intuitive, and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and come as well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
- Diffusion models and schedulers are provided as concise, elementary building blocks, whereas diffusion pipelines are collections of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementations, and can include components from other libraries, such as text encoders. Examples of diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).

## Quickstart

**Check out this notebook: https://colab.research.google.com/drive/1nMfF04cIxg6FujxsNYi9kiTRrzj4_eZU?usp=sharing**

### Installation

```
pip install diffusers  # should install diffusers 0.0.4
```

### 1. `diffusers` as a toolbox for schedulers and models

`diffusers` is more modularized than `transformers`. The idea is that researchers and engineers can easily use only parts of the library for their own use cases.
It could become a central place for all kinds of models, schedulers, training utils, and processors that one can mix and match for one's own use case.
Both models and schedulers should be loadable and saveable from the Hub.

For more examples see [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) and [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models).
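At its core, the save/load contract mentioned above is a config round-trip to disk. The sketch below mimics that idea with plain JSON; `ToyModel`, `save_config`, and `from_config` are illustrative names for this sketch, not the library's real serialization API.

```python
import json
import os
import tempfile

class ToyModel:
    """Toy stand-in for a configurable model: its constructor arguments
    fully describe it, so they can be written to and read from disk."""

    def __init__(self, resolution, in_channels):
        self.config = {"resolution": resolution, "in_channels": in_channels}

    def save_config(self, directory):
        # Write the constructor arguments as JSON next to the weights.
        with open(os.path.join(directory, "config.json"), "w") as f:
            json.dump(self.config, f)

    @classmethod
    def from_config(cls, directory):
        # Rebuild the model by feeding the stored config back in.
        with open(os.path.join(directory, "config.json")) as f:
            return cls(**json.load(f))

with tempfile.TemporaryDirectory() as d:
    ToyModel(resolution=256, in_channels=3).save_config(d)
    reloaded_config = ToyModel.from_config(d).config
```

Loading from the Hub adds a download step in front of this round-trip, but the config-driven reconstruction is the same idea.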
#### **Example for Unconditional Image generation [DDPM](https://arxiv.org/abs/2006.11239):**

```python
import torch
from diffusers import UNetUnconditionalModel, DDIMScheduler
import PIL.Image
import numpy as np
import tqdm

torch_device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Load models
scheduler = DDIMScheduler.from_config("fusing/ddpm-celeba-hq", tensor_format="pt")
unet = UNetUnconditionalModel.from_pretrained("fusing/ddpm-celeba-hq", ddpm=True).to(torch_device)

# 2. Sample gaussian noise
generator = torch.manual_seed(23)
unet.image_size = unet.resolution
image = torch.randn(
    (1, unet.in_channels, unet.image_size, unet.image_size),
    generator=generator,
)
image = image.to(torch_device)

# 3. Denoise
num_inference_steps = 50
eta = 0.0  # <- deterministic sampling
scheduler.set_timesteps(num_inference_steps)

for t in tqdm.tqdm(scheduler.timesteps):
    # 1. predict noise residual
    with torch.no_grad():
        residual = unet(image, t)["sample"]

    # 2. compute previous image: x_t -> x_t-1
    prev_image = scheduler.step(residual, t, image, eta)["prev_sample"]

    # 3. set current image to prev_image
    image = prev_image

# 4. process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])

# 5. save image
image_pil.save("generated_image.png")
```
#### **Example for Unconditional Image generation [LDM](https://github.com/CompVis/latent-diffusion):**
```python
```