* make accelerate hard dep
* default fast init
* move params to cpu when device map is None
* handle device_map=None
* handle torch < 1.9
* remove device_map="auto"
* style
* add accelerate in torch extra
* remove accelerate from extras["test"]
* raise an error if torch is available but not accelerate
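A minimal sketch of that guard, assuming helper names like those in `diffusers.utils`; the exact message and placement are assumptions:

```python
# hedged sketch of the torch-without-accelerate guard described above;
# helper names and the error message are assumptions, not the exact code
import importlib.util


def is_torch_available() -> bool:
    return importlib.util.find_spec("torch") is not None


def is_accelerate_available() -> bool:
    return importlib.util.find_spec("accelerate") is not None


if is_torch_available() and not is_accelerate_available():
    raise ImportError(
        "`torch` is installed but `accelerate` is missing. Since `accelerate` is now a hard "
        "dependency for PyTorch models, install it with `pip install accelerate` or "
        "`pip install diffusers[torch]`."
    )
```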
* update installation docs
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* improve default loading speed even further, allow disabling fast loading
* address review comments
* adapt the tests
* fix test_stable_diffusion_fast_load
* fix test_read_init
* temp fix for dummy checks
* Trigger Build
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Changes for VQ-diffusion VQVAE
Allow specifying the dimension of embeddings in `VQModel`:
by default, `VQModel` sets the embedding dimension to the number
of latent channels, but the VQ-diffusion VQVAE uses a smaller
embedding dimension (128) than its number of latent channels (256).
Add `AttnDownEncoderBlock2D` and `AttnUpDecoderBlock2D` to the down and up
UNet block helpers; VQ-diffusion's VQVAE uses these two block types.
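A hedged configuration sketch for the points above; the argument names (`latent_channels`, `vq_embed_dim`) are assumptions based on the description:

```python
# illustrative VQModel config per the description above; argument names
# such as `vq_embed_dim` are assumptions, not necessarily the final API
from diffusers import VQModel

vqvae = VQModel(
    block_out_channels=(128, 256),
    down_block_types=("DownEncoderBlock2D", "AttnDownEncoderBlock2D"),
    up_block_types=("AttnUpDecoderBlock2D", "UpDecoderBlock2D"),
    latent_channels=256,  # number of latent channels in the VQ-diffusion VQVAE
    vq_embed_dim=128,     # embedding dimension, smaller than latent_channels
)
```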
* Changes for VQ-diffusion transformer
Modify attention.py so SpatialTransformer can be used for
VQ-diffusion's transformer.
SpatialTransformer:
- Can now operate over discrete inputs (classes of vector embeddings) as well as continuous ones.
- `in_channels` was made optional in the constructor, so the two call sites that passed it as a positional argument now pass it as a keyword argument.
- modified forward pass to take optional timestep embeddings
ImagePositionalEmbeddings:
- added to provide positional embeddings to discrete inputs for latent pixels
BasicTransformerBlock:
- norm layers were made configurable so that VQ-diffusion can use `AdaLayerNorm` with timestep embeddings
- modified forward pass to take optional timestep embeddings
CrossAttention:
- now may optionally take a bias parameter for its query, key, and value linear layers
FeedForward:
- Internal layers are now configurable
ApproximateGELU:
- Activation function in VQ-diffusion's feedforward layer
AdaLayerNorm:
- Norm layer modified to incorporate timestep embeddings (see the sketch below)
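A sketch of the two pieces most specific to VQ-diffusion, assuming the standard formulations (a timestep-conditioned LayerNorm and the 1.702-sigmoid GELU approximation); treat it as illustrative rather than the exact implementation:

```python
# hedged sketches of AdaLayerNorm and ApproximateGELU as described above
import torch
from torch import nn


class AdaLayerNorm(nn.Module):
    """LayerNorm whose scale and shift are predicted from a timestep embedding."""

    def __init__(self, embedding_dim: int, num_embeddings: int):
        super().__init__()
        self.emb = nn.Embedding(num_embeddings, embedding_dim)
        self.silu = nn.SiLU()
        self.linear = nn.Linear(embedding_dim, embedding_dim * 2)
        # affine parameters come from the timestep, so the norm itself has none
        self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

    def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim), timestep: (batch,) long tensor
        emb = self.linear(self.silu(self.emb(timestep)))
        scale, shift = torch.chunk(emb, 2, dim=-1)
        return self.norm(x) * (1 + scale[:, None, :]) + shift[:, None, :]


class ApproximateGELU(nn.Module):
    """Sigmoid-based GELU approximation used in VQ-diffusion's feed-forward."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)
        return x * torch.sigmoid(1.702 * x)
```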
* Add VQ-diffusion scheduler
* Add VQ-diffusion pipeline
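A hedged usage sketch for the new pipeline; the hub checkpoint name is an assumption:

```python
# illustrative usage of VQDiffusionPipeline; checkpoint name is an assumption
import torch
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

image = pipe("a teddy bear playing in the pool").images[0]
image.save("teddy_bear.png")
```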
* Add VQ-diffusion convert script to diffusers
* Add VQ-diffusion dummy objects
* Add VQ-diffusion markdown docs
* Add VQ-diffusion tests
* some renaming
* some fixes
* more renaming
* correct
* fix typo
* correct weights
* finalize
* fix tests
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* finish
* finish
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* feat: add repaint
* fix: fix quality check with `make fix-copies`
* fix: remove old unnecessary arg
* chore: change default to DDPM (looks better in experiments)
* ".to(device)" changed to "device="
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* make generator device-specific
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* make generator device-specific and change shape
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
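The generator commits above amount to creating the RNG on the sampling device and sampling there directly, rather than sampling on CPU and moving the result; a sketch:

```python
# sketch of the device-specific generator pattern described above
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
shape = (1, 3, 256, 256)

# before: CPU generator, tensor sampled on CPU and then moved
# noise = torch.randn(shape, generator=torch.Generator().manual_seed(0)).to(device)

# after: the generator lives on the target device and randn samples there directly
generator = torch.Generator(device=device).manual_seed(0)
noise = torch.randn(shape, generator=generator, device=device)
```

Note that CPU and CUDA generators do not produce the same sequence for a given seed.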
* fix: add preprocessing for image and mask
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* fix: update test
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* Update src/diffusers/pipelines/repaint/pipeline_repaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add docs and examples
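A hedged usage sketch of the RePaint pipeline; the checkpoint name and the mask convention are assumptions, and `jump_length`/`jump_n_sample` follow the RePaint paper's resampling schedule:

```python
# illustrative RePaint usage; checkpoint name and mask convention are assumptions
from PIL import Image

from diffusers import RePaintPipeline, RePaintScheduler

original = Image.open("celeba_hq_256.png")
mask = Image.open("mask_256.png")  # which color is kept vs. repainted is an assumption

scheduler = RePaintScheduler()
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)

output = pipe(
    image=original,
    mask_image=mask,
    num_inference_steps=250,
    jump_length=10,    # resampling: how far back to jump in the noise schedule
    jump_n_sample=10,  # resampling: how many times to jump per step
)
inpainted = output.images[0]
```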
* Fix toctree
Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* improve test precision
get tests passing with greater precision using the Lewington images
* make the old numpy load function a wrapper around a more flexible numpy loading function
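A hedged sketch of that refactor: the old fixture helper keeps its signature but delegates to a general loader that accepts either a local path or a URL (names and the dataset URL are assumptions):

```python
# hedged sketch of the numpy-loading refactor described above; function names
# and the fixture URL are assumptions
import io

import numpy as np
import requests


def load_numpy(path_or_url: str) -> np.ndarray:
    """Flexible loader: accepts a local file path or a remote URL."""
    if path_or_url.startswith(("http://", "https://")):
        response = requests.get(path_or_url)
        response.raise_for_status()
        return np.load(io.BytesIO(response.content))
    return np.load(path_or_url)


def load_hf_numpy(name: str) -> np.ndarray:
    """The old helper, now a thin wrapper that resolves a fixture name to a URL."""
    base_url = "https://huggingface.co/datasets/fusing/diffusers-testing/resolve/main"
    return load_numpy(f"{base_url}/{name}")
```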
* adhere to black formatting
* add more black formatting
* adhere to isort
* loosen precision and replace path
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* finish
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Integrate compatibility with Euler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
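The scheduler-docs and Euler-compatibility work above centers on instantiating a compatible scheduler from the config of the one a pipeline shipped with; a sketch, with the checkpoint name as an assumption:

```python
# hedged sketch of the scheduler-swapping pattern the docs describe;
# checkpoint name is an assumption
from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# any compatible scheduler can be built from the existing scheduler's config
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse").images[0]
```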
* start
* add more logic
* Update src/diffusers/models/unet_2d_condition_flax.py
* match weights
* up
* make model work
* making class more general, fixing missed file rename
* small fix
* make new conversion work
* up
* finalize conversion
* up
* first batch of variable renamings
* remove c and c_prev var names
* add mid and out block structure
* add pipeline
* up
* finish conversion
* finish
* upload
* more fixes
* Apply suggestions from code review
* add attr
* up
* up
* up
* finish tests
* finish
* up
* finish
* fix test
* up
* naming consistency in tests
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
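The conversion commits above follow the usual diffusers pattern of renaming source state-dict keys into the library's layout before loading; a generic, hedged sketch (the mapping entries and checkpoint layout are illustrative only):

```python
# generic, hedged sketch of a renaming-based checkpoint conversion;
# the key mappings below are illustrative, not the actual conversion table
import torch


def rename_key(key: str) -> str:
    # illustrative renames; the real mapping depends on the source checkpoint
    replacements = [
        ("norm_out", "conv_norm_out"),
        ("mid.block", "mid_block.resnets"),
    ]
    for old, new in replacements:
        key = key.replace(old, new)
    return key


def convert_state_dict(src: dict) -> dict:
    return {rename_key(k): v for k, v in src.items()}


checkpoint = torch.load("source_checkpoint.ckpt", map_location="cpu")
converted = convert_state_dict(checkpoint["state_dict"])  # layout is an assumption
```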
* remove hardcoded 16
* Remove bogus
* fix some stuff
* finish
* improve logging
* docs
* upload
Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>