* make attn slice recursive
* remove set_attention_slice from blocks
* fix copies
* make enable_attention_slicing base class method of DiffusionPipeline
* fix set_attention_slice
* fix set_attention_slice
* fix copies
* add tests
* up
* up
* up
* update
* up
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
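A minimal usage sketch for the attention-slicing change above (model id and prompt are illustrative): `enable_attention_slicing` now lives on the `DiffusionPipeline` base class and propagates the slice size recursively instead of each block implementing `set_attention_slice`.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model id; any DiffusionPipeline subclass now exposes these methods.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_attention_slicing()   # slice size is pushed recursively to all attention layers
image = pipe("an astronaut riding a horse").images[0]
pipe.disable_attention_slicing()  # back to full attention
```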
* Add parameter safe_serialization to DiffusionPipeline.save_pretrained
* Add option safe_serialization on ModelMixin.save_pretrained
* Add test test_save_safe_serialization
* Black
* Re-trigger the CI
* Fix doc-builder
* Validate files are saved as safetensor in test_save_safe_serialization
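Sketch of the new flag (paths and model id are illustrative): `safe_serialization=True` writes `.safetensors` files instead of pickled `.bin` weights, on whole pipelines and on individual `ModelMixin` models.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Save all sub-models as .safetensors instead of pickled .bin files
pipe.save_pretrained("./sd15-safetensors", safe_serialization=True)

# The same option exists on ModelMixin.save_pretrained
pipe.unet.save_pretrained("./sd15-unet-safetensors", safe_serialization=True)
```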
* feat: switch core pipelines to use image arg
* test: update tests for core pipelines
* feat: switch examples to use image arg
* docs: update docs to use image arg
* style: format code using black and doc-builder
* fix: deprecate use of init_image in all pipelines
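Rough before/after for the argument rename (model id and inputs are illustrative); `init_image` still works for now but emits a deprecation warning.

```python
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
init = Image.open("sketch.png").convert("RGB")

# New argument name
out = pipe(prompt="a fantasy landscape", image=init, strength=0.75).images[0]

# Old call style still runs, but warns that init_image is deprecated:
# out = pipe(prompt="a fantasy landscape", init_image=init, strength=0.75).images[0]
```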
* Flax: start adapting to Stable Diffusion 2
* More changes.
* attention_head_dim can be a tuple.
* Fix typos
* Add simple SD 2 integration test.
Slice values taken from my Ampere GPU.
* Add simple UNet integration tests for Flax.
Note that the expected values are taken from the PyTorch results. This
ensures the Flax and PyTorch versions are not too far off.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Typos and style
* Tests: verify jax is available.
* Style
* Make flake happy
* Remove typo.
* Simple Flax SD 2 pipeline tests.
* Import order
* Remove unused import.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: @camenduru
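A small sketch of the config change Stable Diffusion 2 needs on the Flax side (the exact values below are illustrative, not taken from this PR): `attention_head_dim` can now be a per-block tuple.

```python
from diffusers import FlaxUNet2DConditionModel

# attention_head_dim may now be a tuple with one entry per block,
# matching the Stable Diffusion 2 UNet layout (values illustrative).
unet = FlaxUNet2DConditionModel(
    sample_size=96,
    attention_head_dim=(5, 10, 20, 20),
    cross_attention_dim=1024,
)
```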
* Add heun
* Finish first version of heun
* remove bogus
* finish
* finish
* improve
* up
* up
* fix more
* change progress bar
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* finish
* up
* up
* up
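Quick sketch of plugging in the new scheduler (model id and prompt are illustrative): the Heun scheduler can be swapped into an existing pipeline via `from_config`.

```python
from diffusers import HeunDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Reuse the pipeline's scheduler config and swap in Heun
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("a photograph of a cat", num_inference_steps=30).images[0]
```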
* [Proposal] Support loading from safetensors if file is present.
* Style.
* Fix.
* Adding some test to check loading logic.
+ modify download logic so the PyTorch weight file is not downloaded when it is not needed.
* Fixing the logic.
* Addressing comments.
* factor out into a function.
* Remove dead function.
* Typo.
* Extra fetch only if a safetensors file is present.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
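How the loading preference plays out in practice (repo id illustrative): if a checkpoint ships `.safetensors` weights and the `safetensors` package is installed, `from_pretrained` loads them and skips downloading the pickled `.bin` files.

```python
from diffusers import StableDiffusionPipeline

# With .safetensors files in the repo and `safetensors` installed,
# the safetensors weights are loaded and the .bin files are not fetched.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```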
* StableDiffusionUpscalePipeline
* fix a few things
* make it better
* fix image batching
* run vae in fp32
* fix docstr
* resize to a multiple of 64
* doc
* remove safety_checker
* add max_noise_level
* fix `# Copied from` comments
* begin tests
* slow tests
* default max_noise_level
* remove kwargs
* doc
* fix
* fix fast tests
* fix fast tests
* no sf
* don't offload vae
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
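Usage sketch for the new pipeline (model id, prompt, inputs, and the `noise_level` value are illustrative); `noise_level` is validated against the new `max_noise_level` config entry.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("low_res_cat.png").convert("RGB")
upscaled = pipe(prompt="a white cat", image=low_res, noise_level=20).images[0]
upscaled.save("upscaled_cat.png")
```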
* Adapt ddpm, dpmsolver to prediction_type.
* Deprecate predict_epsilon in __init__.
* Bring FlaxDDIMScheduler up to date with DDIMScheduler.
* Set prediction_type as an ivar for consistency.
* Convert pipeline_ddpm
* Adapt tests.
* Adapt unconditional training script.
* Adapt BitDiffusion example.
* Add missing kwargs in dpmsolver_multistep
* Ugly workaround to accept deprecated predict_epsilon when loading
schedulers using from_pretrained.
* make style
* Remove import no longer in use.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Use config.prediction_type everywhere
* Add a couple of Flax prediction type tests.
* make style
* fix register deprecated arg
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
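Minimal sketch of the scheduler-side change (values illustrative): `prediction_type` is now a regular config argument, and the old `predict_epsilon` flag is only accepted with a deprecation warning when loading existing configs.

```python
from diffusers import DDPMScheduler

# prediction_type replaces the deprecated predict_epsilon flag;
# other values such as "v_prediction" are supported by the schedulers that implement them.
scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="epsilon")
print(scheduler.config.prediction_type)
```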
* up
* convert dual unet
* revert dual attn
* adapt for vd-official
* test the full pipeline
* mixed inference
* mixed inference for text2img
* add image prompting
* fix clip norm
* split text2img and img2img
* fix format
* refactor text2img
* mega pipeline
* add optimus
* refactor image var
* wip text_unet
* text unet end to end
* update tests
* reshape
* fix image to text
* add some first docs
* dual guided pipeline
* fix token ratio
* propose change
* dual transformer as a native module
* DualTransformer(nn.Module)
* DualTransformer(nn.Module)
* correct unconditional image
* save-load with mega pipeline
* remove image to text
* up
* up
* fix
* up
* final fix
* remove_unused_weights
* test updates
* save progress
* up
* fix dual prompts
* some fixes
* finish
* style
* finish renaming
* up
* fix
* fix
* fix
* finish
Co-authored-by: anton-l <anton@huggingface.co>
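End-to-end sketch for the Versatile Diffusion "mega" pipeline described above (repo id, prompt, and image are illustrative): the single pipeline exposes text-to-image, image variation, and dual-guided generation.

```python
import torch
from PIL import Image
from diffusers import VersatileDiffusionPipeline

pipe = VersatileDiffusionPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
).to("cuda")

prompt = "a red panda eating bamboo"
image = Image.open("panda.png").convert("RGB")

text2img = pipe.text_to_image(prompt).images[0]
variation = pipe.image_variation(image).images[0]
dual = pipe.dual_guided(prompt=prompt, image=image, text_to_image_strength=0.75).images[0]
```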
* make sure fp16 runs well
* add fp16 test for superes
* Update src/diffusers/models/unet_2d.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* gen on cuda
* always run fast inference test on cpu
* run on cpu
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix non square images with UNet2DModel and DDIM/DDPM pipelines
* fix unet_2d `sample_size` docstring
* update pipeline tests for unet uncond
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
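A hypothetical config illustrating the fix (the block layout and sizes are made up): `sample_size` can be a (height, width) pair, so the unconditional DDPM/DDIM pipelines can produce non-square images.

```python
from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel

# sample_size can now be a (height, width) tuple; the block layout below is illustrative.
unet = UNet2DModel(
    sample_size=(32, 64),
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "AttnUpBlock2D", "UpBlock2D"),
)
pipe = DDPMPipeline(unet=unet, scheduler=DDPMScheduler(num_train_timesteps=1000))
image = pipe(num_inference_steps=10).images[0]  # 32 (H) x 64 (W) output
```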
* Handle batches and Tensors in `prepare_mask_and_masked_image`
* `blackify`
upgrade `black`
* handle mask as `np.array`
* add docstring
* revert `black` changes with smaller line length
* missing ValueError in docstring
* raise `TypeError` for image as tensor but not mask
* typo in mask shape selection
* check for batch dim
* fix: wrong indentation
* add tests
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
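Rough sketch of what the reworked helper now accepts (shapes and values are illustrative, and the function is an internal utility rather than public API):

```python
import torch
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint import (
    prepare_mask_and_masked_image,
)

# Inputs may be PIL images, numpy arrays, or torch tensors, batched or not.
# If the image is a tensor the mask must be a tensor too (TypeError otherwise),
# and the image is expected to be scaled to [-1, 1].
image = torch.rand(1, 3, 64, 64) * 2 - 1
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()

mask, masked_image = prepare_mask_and_masked_image(image, mask)
```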
* Add legacy inpainting pipeline compatibility for onnx
* remove commented out line
* Add onnx legacy inpainting test
* Fix slow decorators
* pep8 styling
* isort styling
* dummy object
* ordering consistency
* style
* docstring styles
* Refactor common prompt encoding pattern
* Update tests to permanent repository home
* support all available schedulers until ONNX IO binding is available
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
* update styling based on PR review feedback
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
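A usage sketch for the ONNX legacy inpainting pipeline (model revision, execution provider, and inputs are illustrative):

```python
from PIL import Image
from diffusers import OnnxStableDiffusionInpaintPipelineLegacy

pipe = OnnxStableDiffusionInpaintPipelineLegacy.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="onnx", provider="CPUExecutionProvider"
)

init = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

out = pipe(prompt="a cat sitting on a bench", image=init, mask_image=mask, strength=0.8).images[0]
```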