From 17b8adeb0e46636beb6b8375d4f5d4235f182fc0 Mon Sep 17 00:00:00 2001
From: Patrick von Platen
Date: Thu, 1 Sep 2022 10:32:25 +0200
Subject: [PATCH 1/3] Update README.md

---
 src/diffusers/pipelines/README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/src/diffusers/pipelines/README.md b/src/diffusers/pipelines/README.md
index f79d96fb..20dff4c6 100644
--- a/src/diffusers/pipelines/README.md
+++ b/src/diffusers/pipelines/README.md
@@ -77,9 +77,7 @@ all of our pipelines to be **self-contained**, **easy-to-tweak**, **beginner-fr
 - **Easy-to-use**: Pipelines should be extremely easy to use - one should be able to load the pipeline and use it for its designated task, *e.g.* text-to-image generation, in just a couple of lines of code. Most logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the `__call__` method.
 
-- **Easy-to-tweak**: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs.
-
-We try to make the pipeline code as readable as possible so that each part –from pre-processing to diffusing to post-processing– can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/commmunity). If you feel that an important pipeline should be part of the official pipelines but isn't, a contribution to the [official pipelines](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines) would be even better.
+- **Easy-to-tweak**: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part –from pre-processing to diffusing to post-processing– can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/commmunity). If you feel that an important pipeline should be part of the official pipelines but isn't, a contribution to the [official pipelines](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines) would be even better.
 - **One-purpose-only**: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, *e.g.* image2image translation and in-painting, pipelines shall be used for one task only to keep them *easy-to-tweak* and *readable*.
 
 ## Examples

From 034673bbeb00452ed7167df35adbee5d436d3d52 Mon Sep 17 00:00:00 2001
From: Kirill
Date: Thu, 1 Sep 2022 12:29:34 +0300
Subject: [PATCH 2/3] Fix stable-diffusion-seeds.ipynb link (#309)

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4522c91f..931f854d 100644
--- a/README.md
+++ b/README.md
@@ -152,7 +152,7 @@ images[0].save("cat_on_bench.png")
 
 ### Tweak prompts reusing seeds and latents
 
-You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. [This notebook](stable-diffusion-seeds.ipynb) shows how to do it step by step. You can also run it in Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb).
+You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. [This notebook](https://github.com/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) shows how to do it step by step. You can also run it in Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb).
 
 For more details, check out [the Stable Diffusion notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb)

From 1f196a09fea0bb62308a31b66f1c398ff851959d Mon Sep 17 00:00:00 2001
From: Juan Carrasquilla <68667541+JC-swEng@users.noreply.github.com>
Date: Thu, 1 Sep 2022 04:31:02 -0500
Subject: [PATCH 3/3] Changed variable name from "h" to "hidden_states" (#285)

* Changed variable name from "h" to "hidden_states"

Per issue #198, changed variable name from "h" to "hidden_states" in the forward function only. I am happy to change any other variable names, please advise recommended new names.

* Update src/diffusers/models/resnet.py

Co-authored-by: Patrick von Platen

Co-authored-by: Patrick von Platen
---
 src/diffusers/models/resnet.py | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/src/diffusers/models/resnet.py b/src/diffusers/models/resnet.py
index acce7b57..50382bca 100644
--- a/src/diffusers/models/resnet.py
+++ b/src/diffusers/models/resnet.py
@@ -328,39 +328,39 @@ class ResnetBlock2D(nn.Module):
         if self.use_nin_shortcut:
             self.conv_shortcut = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
 
-    def forward(self, x, temb, hey=False):
-        h = x
+    def forward(self, x, temb):
+        hidden_states = x
 
         # make sure hidden states is in float32
         # when running in half-precision
-        h = self.norm1(h.float()).type(h.dtype)
-        h = self.nonlinearity(h)
+        hidden_states = self.norm1(hidden_states.float()).type(hidden_states.dtype)
+        hidden_states = self.nonlinearity(hidden_states)
 
         if self.upsample is not None:
             x = self.upsample(x)
-            h = self.upsample(h)
+            hidden_states = self.upsample(hidden_states)
         elif self.downsample is not None:
             x = self.downsample(x)
-            h = self.downsample(h)
+            hidden_states = self.downsample(hidden_states)
 
-        h = self.conv1(h)
+        hidden_states = self.conv1(hidden_states)
 
         if temb is not None:
             temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None]
-            h = h + temb
+            hidden_states = hidden_states + temb
 
         # make sure hidden states is in float32
         # when running in half-precision
-        h = self.norm2(h.float()).type(h.dtype)
-        h = self.nonlinearity(h)
+        hidden_states = self.norm2(hidden_states.float()).type(hidden_states.dtype)
+        hidden_states = self.nonlinearity(hidden_states)
 
-        h = self.dropout(h)
-        h = self.conv2(h)
+        hidden_states = self.dropout(hidden_states)
+        hidden_states = self.conv2(hidden_states)
 
         if self.conv_shortcut is not None:
             x = self.conv_shortcut(x)
 
-        out = (x + h) / self.output_scale_factor
+        out = (x + hidden_states) / self.output_scale_factor
 
         return out
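Patch 2 above fixes the link to the seeds-and-latents notebook. The workflow that notebook describes, reusing a seeded latent tensor so a liked result can be reproduced while the prompt is tweaked, can be sketched as follows. This is a minimal sketch, not code from the patches: the latent shape `(1, 4, 64, 64)` assumes a 512x512 Stable Diffusion UNet, and the pipeline call is shown only as a comment because it needs loaded model weights.

```python
import torch

# Create latents from a fixed seed so the same starting noise
# can be regenerated later.
generator = torch.Generator().manual_seed(42)
latents = torch.randn((1, 4, 64, 64), generator=generator)

# Re-seeding the generator reproduces the exact same latents,
# which is what makes seed-based reproduction possible.
generator = torch.Generator().manual_seed(42)
latents_again = torch.randn((1, 4, 64, 64), generator=generator)
assert torch.equal(latents, latents_again)

# The stored latents could then be reused across prompt variations,
# e.g. (hypothetical call, requires a loaded pipeline `pipe`):
# image = pipe("a photo of a cat", latents=latents).images[0]
```

Because the latents are fixed, only the prompt changes between runs, so differences in the output come from the prompt edit rather than from fresh random noise.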