* Update README.md
Additionally, add FLAX so the model card can be slimmer and point to this page
* Find and replace all
* v-1-5 -> v1-5
* revert test changes
* Update README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/quicktour.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update README.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/quicktour.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Revert certain references to v1-5
* Docs changes
* Apply suggestions from code review
Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* mps: alt. implementation for repeat_interleave
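A minimal sketch of the alternative, assuming the goal is to duplicate prompt embeddings along the batch dimension without `repeat_interleave` (which was problematic on mps at the time); the function name is hypothetical:

```python
import torch

def repeat_interleave_dim0(x: torch.Tensor, repeats: int) -> torch.Tensor:
    # Emulates torch.repeat_interleave(x, repeats, dim=0) with repeat + view,
    # which runs fine on mps. x: (batch, seq_len, dim).
    bs, seq_len, dim = x.shape
    x = x.repeat(1, repeats, 1)  # (bs, repeats * seq_len, dim)
    return x.view(bs * repeats, seq_len, dim)

emb = torch.randn(2, 77, 768)
assert repeat_interleave_dim0(emb, 3).shape == (2 * 3, 77, 768)
```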
* style
* Bump mps version of PyTorch in the documentation.
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Simplify: do not check for device.
* style
* Fix repeat dimensions:
- The unconditional embeddings are always created from a single prompt.
- I was shadowing the batch_size var.
* Split long lines as suggested by Suraj.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep non-blocking (NB) transfers
* non-blocking unet with PNDMScheduler
* make timesteps np arrays for pndm scheduler
because lists don't get formatted to tensors in `self.set_format`
* make pndm maximally async
* use channels_last format in unet
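A hedged illustration of the memory-format switch in PyTorch (standalone toy model, not the actual UNet):

```python
import torch
import torch.nn as nn

# Convolutions can be faster in channels_last layout on some hardware,
# especially together with mixed precision.
model = nn.Conv2d(4, 8, kernel_size=3, padding=1)
model = model.to(memory_format=torch.channels_last)

x = torch.randn(1, 4, 64, 64).to(memory_format=torch.channels_last)
out = model(x)  # output stays in channels_last
```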
* avoid moving timesteps device in each unet call
* avoid memcpy op in `get_timestep_embedding`
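A sketch of the pattern, assuming the copy came from building the frequency range on CPU and moving it afterwards; creating it directly on `timesteps.device` avoids a host-to-device memcpy on every UNet call (simplified relative to the real helper):

```python
import math
import torch

def get_timestep_embedding(timesteps: torch.Tensor, embedding_dim: int) -> torch.Tensor:
    # Sinusoidal timestep embeddings; note device=timesteps.device on the
    # arange, so no CPU tensor is allocated and copied per call.
    half_dim = embedding_dim // 2
    exponent = -math.log(10000) * torch.arange(
        half_dim, dtype=torch.float32, device=timesteps.device
    )
    emb = timesteps[:, None].float() * torch.exp(exponent / (half_dim - 1))[None, :]
    return torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1)

print(get_timestep_embedding(torch.arange(4), 32).shape)  # torch.Size([4, 32])
```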
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave it for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at beginning of pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
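A hedged example of the user-facing consequence: weights can be loaded in half precision directly, and the pipeline then runs natively in fp16 with no `torch.autocast` context (model id as used in the docs of this era):

```python
import torch
from diffusers import StableDiffusionPipeline

# No `with torch.autocast("cuda"):` needed; the model itself is fp16.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
```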
* attn refactoring
* Revert "attn refactoring"
This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
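A sketch of the substitution, assuming query/key are already reshaped to `(batch * heads, seq_len, head_dim)`; `baddbmm` fuses the `1/sqrt(d)` scaling into the batched matmul via `alpha`:

```python
import torch

bh, seq_len, head_dim = 16, 77, 64
scale = head_dim ** -0.5
q = torch.randn(bh, seq_len, head_dim)
k = torch.randn(bh, seq_len, head_dim)

# Before: scores = torch.matmul(q, k.transpose(-1, -2)) * scale
# After: one fused op; beta=0 means the (uninitialized) input is ignored.
scores = torch.baddbmm(
    torch.empty(bh, seq_len, seq_len, dtype=q.dtype, device=q.device),
    q,
    k.transpose(-1, -2),
    beta=0,
    alpha=scale,
)
```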
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf"
This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
* add shapes comments
* hardcode what's needed for jitting
* Revert "hardcode what's needed for jitting"
This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
* Revert "remove restriction to run conv_norm in fp32"
This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32. No quality loss was noticed
This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
* add more optimization techniques to docs
* Revert "add shapes comments"
This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timesteps to correct device if tensors
* add device to `set_timesteps` in LMSD scheduler
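A hedged sketch of the resulting call pattern (within this PR, only the LMS scheduler gained the kwarg):

```python
import torch
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler()
device = "cuda" if torch.cuda.is_available() else "cpu"

# Timesteps/sigmas are created on the target device up front, so the
# denoising loop needs no per-step transfers.
scheduler.set_timesteps(50, device=device)
```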
* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers `set_timesteps`
* revert to using max in K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"
This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
* move timesteps to correct device before loop in SD pipeline
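Roughly what the fix looks like, using stand-in values for what `scheduler.set_timesteps` produces:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
timesteps = torch.linspace(999, 0, 50).long()  # stand-in for scheduler.timesteps

# One transfer before the loop instead of an implicit transfer per step:
timesteps = timesteps.to(device)
for t in timesteps:
    pass  # unet(latents, t, ...) and scheduler.step(...) run here
```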
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on wrong device, to avoid errors
- it shouldn't affect performance if timesteps are already on the correct device
- it does slow down performance if they're on the wrong device
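A sketch of the defensive handling this implies inside the UNet forward; the helper name and exact dtypes are approximations:

```python
import torch

def normalize_timestep(timestep, sample: torch.Tensor) -> torch.Tensor:
    # Accept python scalars or tensors on any device; broadcast to the batch
    # and move to the sample's device. The .to() is a no-op when the timestep
    # is already resident, and merely slow (not an error) otherwise.
    if not torch.is_tensor(timestep):
        timestep = torch.tensor([timestep], dtype=torch.long, device=sample.device)
    elif timestep.ndim == 0:
        timestep = timestep[None]
    return timestep.expand(sample.shape[0]).to(sample.device)

x = torch.randn(2, 4, 8, 8)
print(normalize_timestep(10, x))  # tensor([10, 10])
```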
* fix pipeline when timesteps are arrays with strides
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be torch tensor
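A hedged sketch of the pattern that emerged across the schedulers: numpy is fine for intermediate math, but what gets stored on the scheduler is a torch tensor (values illustrative, not the real LMS schedule):

```python
import numpy as np
import torch

class ToyScheduler:
    def set_timesteps(self, num_inference_steps: int, num_train_timesteps: int = 1000):
        timesteps = np.linspace(num_train_timesteps - 1, 0, num_inference_steps)
        sigmas = np.interp(
            timesteps,
            np.arange(num_train_timesteps),
            np.linspace(0.1, 10.0, num_train_timesteps),
        )
        # Store torch tensors so downstream .to(device) / indexing just works.
        self.sigmas = torch.from_numpy(np.concatenate([sigmas, [0.0]]).astype(np.float32))
        self.timesteps = torch.from_numpy(timesteps)

s = ToyScheduler()
s.set_timesteps(50)
print(s.sigmas.dtype, s.timesteps.shape)  # torch.float32 torch.Size([50])
```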
* removed set_format in readme
* remove set format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented out numpy tests
* timesteps need to be discrete
* cast timesteps to int in flax scheduler too
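The flax side of the same constraint, sketched with illustrative values (timesteps index into schedule tables, so they must be integers):

```python
import jax.numpy as jnp

num_train_timesteps, num_inference_steps = 1000, 50
timesteps = jnp.linspace(0, num_train_timesteps - 1, num_inference_steps)
timesteps = jnp.round(timesteps)[::-1].astype(jnp.int32)  # descending ints
```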
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check currently
Ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* docs for attention
* types for embeddings
* unet2d docstrings
* UNet2DConditionModel docstrings
* fix typos
* style and vq-vae docstrings
* docstrings for VAE
* Update src/diffusers/models/unet_2d.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* added "inherits from" sentence
* docstring to forward
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish model docs
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Initial version of `fp16` page.
* Fix typo in README.
* Change titles of fp16 section in toctree.
* PR suggestion
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* PR suggestion
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Clarify attention slicing is useful even for batches of 1
Explained by @patrickvonplaten after a suggestion by @keturn.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
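For reference, the user-facing call (a real method on the SD pipeline; model id as used in the docs of this era):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attention is computed in sequential slices over the heads dimension, cutting
# peak memory; this helps even at batch size 1, since a single image still
# has many attention heads to slice over.
pipe.enable_attention_slicing()
```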
* Do not talk about `batches` in `enable_attention_slicing`.
* Use Tip (just for fun), add link to method.
* Comment about fp16 results looking the same as float32 in practice.
* Style: docstring line wrapping.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* init schedulers docs
* add some docstrings, fix sidebar formatting
* add docstrings
* [Type hint] PNDM schedulers (#335)
* [Type hint] PNDM Schedulers
* ran make style
* updated timesteps type hint
* apply suggestions from code review
* ran make style
* removed unused import
* [Type hint] scheduling ddim (#343)
* [Type hint] scheduling ddim
* apply suggestions from code review
apply suggestions to also annotate the return type
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
* update class docstrings
* add docstrings
* missed merge edit
* add general docs page
* modify headings for right sidebar
Co-authored-by: Partho <parthodas6176@gmail.com>
Co-authored-by: Santiago Víquez <santi.viquez@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Initial description of Stable Diffusion pipeline.
* Placeholder docstrings to test preview.
* Add docstrings to Stable Diffusion pipeline.
* Style
* Docs for all the SD pipelines + attention slicing.
* Style: wrap long lines.
* Update text_inversion.mdx
Getting in a bit of background info
* fixed typo mode -> model
* Link SD and re-write a few bits for clarity
* Copied in info from the example script
As suggested by surajpatil :)
* removed an unnecessary heading