<p align="center">
    <br>
    <img src="docs/source/imgs/diffusers_library.jpg" width="400"/>
    <br>
</p>
<p align="center">
    <a href="https://github.com/huggingface/diffusers/blob/main/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/diffusers.svg?color=blue">
    </a>
    <a href="https://github.com/huggingface/diffusers/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg">
    </a>
    <a href="CODE_OF_CONDUCT.md">
        <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg">
    </a>
</p>
🤗 Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves
as a modular toolbox for inference and training of diffusion models.
More precisely, 🤗 Diffusers offers:
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)).
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
- Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
- Training examples to show how to train the most popular diffusion models (see [examples](https://github.com/huggingface/diffusers/tree/main/examples)).
## Definitions
**Models**: A neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see figure below) and is trained end-to-end to *denoise* a noisy input to an image.

*Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet
<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/>
    <br>
    <em>Figure from DDPM paper (https://arxiv.org/abs/2006.11239).</em>
</p>
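The model's job can be sketched in plain PyTorch. The tiny module below is an illustrative, hypothetical stand-in (not the library's `UNetModel`): it takes a noisy image and a timestep and predicts a noise residual with the same shape as its input, which is the interface every denoising model shares.

```python
import torch
import torch.nn as nn


class TinyNoisePredictor(nn.Module):
    # Toy stand-in for a UNet: maps (noisy image x_t, timestep t) to a
    # predicted noise residual of the same shape. A real denoiser would be
    # a full UNet with skip connections and attention.
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.time_embed = nn.Linear(1, hidden)
        self.conv_in = nn.Conv2d(channels, hidden, 3, padding=1)
        self.act = nn.SiLU()
        self.conv_out = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, t):
        # embed the scalar timestep and broadcast it over the feature map
        emb = self.time_embed(torch.tensor([[float(t)]])).view(1, self.hidden, 1, 1)
        h = self.act(self.conv_in(x) + emb)
        return self.conv_out(h)


x_t = torch.randn(1, 3, 8, 8)
noise_pred = TinyNoisePredictor()(x_t, t=10)
assert noise_pred.shape == x_t.shape  # denoisers preserve the input shape
```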
**Schedulers**: Algorithm class for both **inference** and **training**.
The class provides functionality to compute the previous image according to the alpha/beta schedule, as well as the noise needed for training.

*Examples*: [DDPM](https://arxiv.org/abs/2006.11239), [DDIM](https://arxiv.org/abs/2010.02502), [PNDM](https://arxiv.org/abs/2202.09778), [DEIS](https://arxiv.org/abs/2204.13902)
<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174349706-53d58acc-a4d1-4cda-b3e8-432d9dc7ad38.png" width="800"/>
    <br>
    <em>Sampling and training algorithms. Figure from DDPM paper (https://arxiv.org/abs/2006.11239).</em>
</p>
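The core update a DDPM-style scheduler performs can be written out directly from the equations in the DDPM paper. The snippet below is a library-free sketch using a linear beta schedule (the constants are illustrative, not necessarily the library's defaults): given the model's predicted noise, it computes the mean of $\mathbf{x}_{t-1}$.

```python
import torch

# Linear beta schedule, as in the DDPM paper (constants are illustrative)
num_train_timesteps = 1000
betas = torch.linspace(1e-4, 0.02, num_train_timesteps)
alphas = 1.0 - betas
alphas_cumprod = torch.cumprod(alphas, dim=0)


def ddpm_prev_mean(x_t, noise_pred, t):
    # mu_{t-1} = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps_theta) / sqrt(alpha_t)
    return (x_t - betas[t] / (1.0 - alphas_cumprod[t]).sqrt() * noise_pred) / alphas[t].sqrt()


x_t = torch.randn(1, 3, 8, 8)
eps = torch.randn_like(x_t)
prev_mean = ddpm_prev_mean(x_t, eps, t=999)
assert prev_mean.shape == x_t.shape
```

A full scheduler additionally handles the variance term and the noise added during training; this sketch only shows the deterministic part of one reverse step.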
**Diffusion Pipeline**: End-to-end pipeline that combines one or more diffusion models with other components, such as text encoders.

*Examples*: Glide, Latent-Diffusion, Imagen, DALL-E 2
<p align="center">
    <img src="https://user-images.githubusercontent.com/10695622/174348898-481bd7c2-5457-4830-89bc-f0907756f64c.jpeg" width="550"/>
    <br>
    <em>Figure from Imagen (https://imagen.research.google/).</em>
</p>
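Conceptually, a pipeline just bundles a model and a scheduler-style step function into a single denoising loop. The hypothetical sketch below (names and structure are illustrative, not the library API) shows that control flow with trivial stand-ins:

```python
import torch


class MinimalPipeline:
    # Hypothetical sketch of what a diffusion pipeline bundles: a denoising
    # model and a scheduler-style step function, run over all timesteps in
    # one call. NOT the library API; names are illustrative.
    def __init__(self, model, step_fn, num_steps):
        self.model = model
        self.step_fn = step_fn
        self.num_steps = num_steps

    @torch.no_grad()
    def __call__(self, shape):
        image = torch.randn(shape)  # start from pure Gaussian noise
        for t in reversed(range(self.num_steps)):
            noise_pred = self.model(image, t)           # predict noise residual
            image = self.step_fn(image, noise_pred, t)  # x_t -> x_{t-1}
        return image


# trivial stand-ins, just to show the control flow
pipe = MinimalPipeline(
    model=lambda x, t: torch.zeros_like(x),
    step_fn=lambda x, eps, t: x - 0.01 * eps,
    num_steps=10,
)
sample = pipe((1, 3, 8, 8))
assert sample.shape == (1, 3, 8, 8)
```

Real pipelines such as Glide additionally wrap text encoders, tokenizers, and post-processing, but the loop above is the common core.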
## Philosophy
- Readability and clarity are preferred over highly optimized code. Strong emphasis is placed on readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
- Diffusion models and schedulers are provided as concise, elementary building blocks, whereas diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation, and can include components of other libraries, such as text encoders. Examples of diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
## Quickstart
### Installation
```
pip install diffusers  # should install diffusers 0.0.4
```
### 1. `diffusers` as a toolbox for schedulers and models
`diffusers` is more modularized than `transformers`. The idea is that researchers and engineers can easily use only parts of the library for their own use cases.
It could become a central place for all kinds of models, schedulers, training utilities and processors that one can mix and match for one's own use case.
Both models and schedulers should be loadable and saveable from the Hub.

For more examples see [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) and [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models).
#### **Example for [DDPM](https://arxiv.org/abs/2006.11239):**
```python
import torch
from diffusers import UNetModel, DDPMScheduler
import PIL.Image
import numpy as np
import tqdm

generator = torch.manual_seed(0)
torch_device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Load models
noise_scheduler = DDPMScheduler.from_config("fusing/ddpm-lsun-church", tensor_format="pt")
unet = UNetModel.from_pretrained("fusing/ddpm-lsun-church").to(torch_device)

# 2. Sample Gaussian noise
image = torch.randn(
    (1, unet.in_channels, unet.resolution, unet.resolution),
    generator=generator,
)
image = image.to(torch_device)

# 3. Denoise
num_prediction_steps = len(noise_scheduler)
for t in tqdm.tqdm(reversed(range(num_prediction_steps)), total=num_prediction_steps):
    # predict noise residual
    with torch.no_grad():
        residual = unet(image, t)

    # predict previous mean of image x_t-1
    pred_prev_image = noise_scheduler.step(residual, image, t)

    # optionally sample variance
    variance = 0
    if t > 0:
        noise = torch.randn(image.shape, generator=generator).to(image.device)
        variance = noise_scheduler.get_variance(t).sqrt() * noise

    # set current image to prev_image: x_t -> x_t-1
    image = pred_prev_image + variance

# 4. process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])

# 5. save image
image_pil.save("test.png")
```
#### **Example for [DDIM](https://arxiv.org/abs/2010.02502):**
```python
import torch
from diffusers import UNetModel, DDIMScheduler
import PIL.Image
import numpy as np
import tqdm

generator = torch.manual_seed(0)
torch_device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Load models
noise_scheduler = DDIMScheduler.from_config("fusing/ddpm-celeba-hq", tensor_format="pt")
unet = UNetModel.from_pretrained("fusing/ddpm-celeba-hq").to(torch_device)

# 2. Sample Gaussian noise
image = torch.randn(
    (1, unet.in_channels, unet.resolution, unet.resolution),
    generator=generator,
)
image = image.to(torch_device)

# 3. Denoise
num_inference_steps = 50
eta = 0.0  # <- deterministic sampling
for t in tqdm.tqdm(reversed(range(num_inference_steps)), total=num_inference_steps):
    # map the inference step to the corresponding training timestep
    orig_t = len(noise_scheduler) // num_inference_steps * t

    # predict noise residual
    with torch.inference_mode():
        residual = unet(image, orig_t)

    # predict previous mean of image x_t-1
    pred_prev_image = noise_scheduler.step(residual, image, t, num_inference_steps, eta)

    # optionally sample variance
    variance = 0
    if eta > 0:
        noise = torch.randn(image.shape, generator=generator).to(image.device)
        variance = noise_scheduler.get_variance(t).sqrt() * eta * noise

    # set current image to prev_image: x_t -> x_t-1
    image = pred_prev_image + variance

# 4. process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])

# 5. save image
image_pil.save("test.png")
```
#### **Examples for other modalities:**
[Diffuser](https://diffusion-planning.github.io/) for planning in reinforcement learning (currently only inference): [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1TmBmlYeKUZSkUZoJqfBmaicVTKx6nN1R?usp=sharing)
### 2. `diffusers` as a collection of popular diffusion systems (Glide, DALL-E, ...)

For more examples see [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines).
#### **Example image generation with PNDM**
```python
from diffusers import PNDM, UNetModel, PNDMScheduler
import PIL.Image
import numpy as np
import torch

model_id = "fusing/ddim-celeba-hq"

# load model and scheduler
model = UNetModel.from_pretrained(model_id)
scheduler = PNDMScheduler()
pndm = PNDM(unet=model, noise_scheduler=scheduler)

# run pipeline in inference (sample random noise and denoise)
with torch.no_grad():
    image = pndm()

# process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) / 2
image_processed = torch.clamp(image_processed, 0.0, 1.0)
image_processed = image_processed * 255
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])

# save image
image_pil.save("test.png")
```
#### **Example 1024x1024 image generation with SDE VE**
See [paper](https://arxiv.org/abs/2011.13456) for more information on SDE VE.
```python
from diffusers import DiffusionPipeline
import torch
import PIL.Image
import numpy as np

torch.manual_seed(32)
score_sde_sv = DiffusionPipeline.from_pretrained("fusing/ffhq_ncsnpp")

# Note this might take up to 3 minutes on a GPU
image = score_sde_sv(num_inference_steps=2000)

image = image.permute(0, 2, 3, 1).cpu().numpy()
image = np.clip(image * 255, 0, 255).astype(np.uint8)
image_pil = PIL.Image.fromarray(image[0])

# save image
image_pil.save("test.png")
```
#### **Example 32x32 image generation with SDE VP**
See [paper](https://arxiv.org/abs/2011.13456) for more information on SDE VP.
```python
from diffusers import DiffusionPipeline
import torch
import PIL.Image
import numpy as np

torch.manual_seed(32)
score_sde_sv = DiffusionPipeline.from_pretrained("fusing/cifar10-ddpmpp-deep-vp")

# Note this might take up to 3 minutes on a GPU
image = score_sde_sv(num_inference_steps=1000)

image = image.permute(0, 2, 3, 1).cpu().numpy()
image = np.clip(image * 255, 0, 255).astype(np.uint8)
image_pil = PIL.Image.fromarray(image[0])

# save image
image_pil.save("test.png")
```
#### **Text to Image generation with Latent Diffusion**
_Note: To use latent diffusion install transformers from [this branch](https://github.com/patil-suraj/transformers/tree/ldm-bert)._
```python
import torch
import PIL.Image
import numpy as np
from diffusers import DiffusionPipeline

ldm = DiffusionPipeline.from_pretrained("fusing/latent-diffusion-text2im-large")
generator = torch.manual_seed(42)

prompt = "A painting of a squirrel eating a burger"
image = ldm([prompt], generator=generator, eta=0.3, guidance_scale=6.0, num_inference_steps=50)

# process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = image_processed * 255.
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])

# save image
image_pil.save("test.png")
```
#### **Text to speech with GradTTS and BDDMPipeline**
```python
import torch
from scipy.io.wavfile import write as wavwrite
from diffusers import BDDMPipeline, GradTTSPipeline

torch_device = "cuda"

# load grad tts and bddm pipelines
grad_tts = GradTTSPipeline.from_pretrained("fusing/grad-tts-libri-tts")
bddm = BDDMPipeline.from_pretrained("fusing/diffwave-vocoder-ljspeech")

text = "Hello world, I missed you so much."

# generate mel spectrograms using text
mel_spec = grad_tts(text, torch_device=torch_device)

# generate the speech by passing mel spectrograms to the BDDMPipeline
generator = torch.manual_seed(42)
audio = bddm(mel_spec, generator, torch_device=torch_device)

# save generated audio
sampling_rate = 22050
wavwrite("generated_audio.wav", sampling_rate, audio.squeeze().cpu().numpy())
```
## TODO
- [ ] Create common API for models
- [ ] Add tests for models
- [ ] Adapt schedulers for training
- [ ] Write google colab for training
- [ ] Write docs / Think about how to structure docs
- [ ] Add tests to circle ci
- [ ] Add [Diffusion LM models](https://arxiv.org/pdf/2205.14217.pdf)
- [ ] Add more vision models
- [ ] Add more speech models
- [ ] Add RL model
- [ ] Add FID and KID metrics