* Add Heun
* Finish first version of Heun
* remove bogus
* finish
* finish
* improve
* up
* up
* fix more
* change progress bar
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* finish
* up
* up
* up
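The "Add Heun" commits above introduce a Heun sampler. As a hedged usage sketch (the scheduler class name `HeunDiscreteScheduler`, the checkpoint id, and the CUDA device are assumptions, not taken from these commits), swapping it into a Stable Diffusion pipeline could look like:

```python
# Minimal sketch, assuming the Heun sampler is exposed as HeunDiscreteScheduler
# and that the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA device are available.
import torch
from diffusers import StableDiffusionPipeline, HeunDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Reuse the existing scheduler config so the noise schedule stays consistent.
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=30).images[0]
image.save("astronaut_heun.png")
```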
* [Proposal] Support loading from safetensors if file is present.
* Style.
* Fix.
* Adding some test to check loading logic.
* modify download logic to not download the PyTorch file if not necessary.
* Fixing the logic.
* Addressing comments.
* factor out into a function.
* Remove dead function.
* Typo.
* Extra fetch only if safetensors is there.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
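A hedged sketch of the selection logic this proposal describes (not the actual diffusers implementation; the weight filename is an assumption): prefer a `.safetensors` file when the `safetensors` library is installed and the file exists, otherwise fall back to the PyTorch `.bin` file.

```python
import importlib.util
import os

def pick_weight_file(model_dir: str) -> str:
    """Illustrative only: choose safetensors weights when possible."""
    has_safetensors = importlib.util.find_spec("safetensors") is not None
    st_path = os.path.join(model_dir, "diffusion_pytorch_model.safetensors")  # assumed filename
    pt_path = os.path.join(model_dir, "diffusion_pytorch_model.bin")
    if has_safetensors and os.path.isfile(st_path):
        # The PyTorch file is not needed (and not downloaded) in this case.
        return st_path
    return pt_path
```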
* Adapt ddpm, dpmsolver to prediction_type.
* Deprecate predict_epsilon in __init__.
* Bring FlaxDDIMScheduler up to date with DDIMScheduler.
* Set prediction_type as an ivar for consistency.
* Convert pipeline_ddpm
* Adapt tests.
* Adapt unconditional training script.
* Adapt BitDiffusion example.
* Add missing kwargs in dpmsolver_multistep
* Ugly workaround to accept deprecated predict_epsilon when loading
schedulers using from_pretrained.
* make style
* Remove import no longer in use.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Use config.prediction_type everywhere
* Add a couple of Flax prediction type tests.
* make style
* fix register deprecated arg
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
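As a sketch of the configuration change above (argument values are illustrative): schedulers now take a `prediction_type` config entry, while the old `predict_epsilon` flag is deprecated but still accepted when loading saved configs via `from_pretrained`.

```python
from diffusers import DDPMScheduler

# New style: declare what the model predicts ("epsilon" = noise residual,
# "sample" = the denoised sample directly).
scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="epsilon")
print(scheduler.config.prediction_type)  # "epsilon"

# Old style, now deprecated (still tolerated for backward compatibility):
# scheduler = DDPMScheduler(num_train_timesteps=1000, predict_epsilon=True)
```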
* fix non-square images with UNet2DModel and DDIM/DDPM pipelines
* fix unet_2d `sample_size` docstring
* update pipeline tests for unet uncond
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
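A hedged example of the non-square case targeted by the fix above; the config values are toy numbers, and passing `sample_size` as a `(height, width)` tuple is the behaviour assumed here.

```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=(32, 64),  # (height, width), no longer assumed to be square
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)

noise = torch.randn(1, 3, 32, 64)
out = model(noise, timestep=10).sample
print(out.shape)  # torch.Size([1, 3, 32, 64])
```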
* add conversion script for vae
* up
* up
* more changes
* push
* up
* finish again
* up
* up
* up
* up
* finish
* up
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* up
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Match the generator device to the pipeline for DDPM and DDIM
* style
* fix
* update values
* fix fast tests
* trigger slow tests
* deprecate
* last value fixes
* mps fixes
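A hedged usage sketch of the change above (model id is illustrative): the `torch.Generator` handed to the DDPM/DDIM pipelines should live on the same device as the pipeline.

```python
import torch
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")

# Create the generator on the pipeline's device so sampling stays on it.
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(generator=generator, num_inference_steps=100).images[0]
image.save("ddpm_cat.png")
```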
* [Scheduler] Move predict epsilon to init
* up
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Schedulers: don't use float64 on mps
* Test set_timesteps() on device (float schedulers).
* SD pipeline: use device in set_timesteps.
* SD in-painting pipeline: use device in set_timesteps.
* Tests: fix mps crashes.
* Skip test_load_pipeline_from_git on mps.
Not compatible with float16.
* Use device.type instead of str in Euler schedulers.
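A minimal sketch of the device handling above, assuming an Euler scheduler: `set_timesteps()` takes a `device` argument so its tensors are created there, and float32 is used because MPS has no float64 support.

```python
import torch
from diffusers import EulerDiscreteScheduler

device = "mps" if torch.backends.mps.is_available() else "cpu"

scheduler = EulerDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)
scheduler.set_timesteps(30, device=device)

print(scheduler.timesteps.device)  # matches `device`
print(scheduler.sigmas.dtype)      # float32 rather than float64, per the change above
```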
* make accelerate hard dep
* default fast init
* move params to cpu when device map is None
* handle device_map=None
* handle torch < 1.9
* remove device_map="auto"
* style
* add accelerate in torch extra
* remove accelerate from extras["test"]
* raise an error if torch is available but not accelerate
* update installation docs
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* improve default loading speed even further, allow disabling fast loading
* address review comments
* adapt the tests
* fix test_stable_diffusion_fast_load
* fix test_read_init
* temp fix for dummy checks
* Trigger Build
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
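As a hedged illustration of the loading behaviour described above (model id is illustrative): with accelerate installed, weights are loaded through a fast, low-memory init path by default, and it can be turned off explicitly.

```python
import torch
from diffusers import StableDiffusionPipeline

# Default: fast initialization with a small memory footprint (uses accelerate).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Opt out and fall back to plain PyTorch weight loading.
pipe_slow = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", low_cpu_mem_usage=False
)
```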
* Fix equality test for ddim and ddpm
* add docs for use_clipped_model_output in DDIM
* fix inline comment
* reorder imports in test_pipelines.py
* Ignore use_clipped_model_output if scheduler doesn't take it
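A hedged, self-contained illustration of the DDIM kwarg documented above; the tensors are random placeholders. With `use_clipped_model_output=True`, `step()` re-derives the noise prediction from the clipped estimate of the original sample.

```python
import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(num_train_timesteps=1000, clip_sample=True)
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 32, 32)        # current noisy latents (placeholder)
model_output = torch.randn(1, 3, 32, 32)  # fake model prediction (placeholder)
t = scheduler.timesteps[0]

out = scheduler.step(model_output, t, sample, use_clipped_model_output=True)
print(out.prev_sample.shape)
```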
* add method to enable cuda with minimal gpu usage to stable diffusion
* add test to minimal cuda memory usage
* ensure all models but unet are on torch.float32
* move to cpu_offload along with minor internal changes to make it work
* make it test against accelerate master branch
* coming back, it's official: I don't know how to make it test against the master branch from accelerate
* make it install accelerate from master on tests
* go back to accelerate>=0.11
* undo prettier formatting on yml files
* undo prettier formatting on yml files again
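A hedged usage sketch of the low-memory path added above. The method name assumed here, `enable_sequential_cpu_offload()`, is how this accelerate-based offload is exposed in later diffusers releases; it moves submodules to the GPU only while they are needed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Requires accelerate; do not call pipe.to("cuda") yourself in this mode.
pipe.enable_sequential_cpu_offload()

image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("lighthouse.png")
```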
* begin pipe
* add new pipeline
* add tests
* correct fast test
* up
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
* Update tests/test_pipelines.py
* up
* up
* make style
* add fp16 test
* doc, comments
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
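A hedged usage sketch for the new in-painting pipeline; the checkpoint id and image files are placeholders. The pipeline takes an init image plus a mask in which white pixels mark the region to repaint.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("dog_on_bench.png").convert("RGB")       # placeholder file
mask_image = Image.open("dog_on_bench_mask.png").convert("RGB")  # white = repaint

result = pipe(
    prompt="a red fox sitting on a park bench",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```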
* [CI] Add Apple M1 tests
* setup-python
* python build
* conda install
* remove branch
* only 3.8 is built for osx-arm
* try fetching prebuilt tokenizers
* use user cache
* update shells
* Reports and cleanup
* -> MPS
* Disable parallel tests
* Better naming
* investigate worker crash
* return xdist
* restart
* num_workers=2
* still crashing?
* faulthandler for segfaults
* faulthandler for segfaults
* remove restarts, stop on segfault
* torch version
* change installation order
* Use pre-RC version of PyTorch.
To be updated when it is released.
* Skip crashing test on MPS, add new one that works.
* Skip cuda tests in mps device.
* Actually use generator in test.
I think this was a typo.
* make style
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Bump to 0.6.0.dev0
* Deprecate tensor_format and .samples
* style
* upd
* upd
* style
* sample -> images
* Update src/diffusers/schedulers/scheduling_ddpm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_ddim.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_karras_ve.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_lms_discrete.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_ve.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_vp.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
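A hedged before/after sketch of the deprecation above (model id illustrative): pipeline outputs are now read from `.images`, replacing the old `sample` field, and the `tensor_format` scheduler argument goes away.

```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")

# New: access generated images via `.images`.
image = pipe(num_inference_steps=50).images[0]
image.save("cat.png")

# Old, deprecated access (roughly): pipe(num_inference_steps=50)["sample"][0]
```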
* Give more customizable options for safety checker
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* Finish
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
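As a hedged sketch of the option above: the safety checker module can now be overridden or disabled when loading the pipeline (disabling it emits a warning).

```python
from diffusers import StableDiffusionPipeline

# Pass your own checker module here instead of None to customize its behaviour.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", safety_checker=None
)
```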
* [Dummy imports] Better error message
* Test: load pipeline with LMS scheduler.
Fails with a cryptic message if scipy is not installed.
* Correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
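A hedged illustration of the scenario the test above covers: `LMSDiscreteScheduler` depends on scipy, so in an environment without scipy the dummy-import error should now clearly point at the missing dependency rather than failing cryptically.

```python
# Run `pip install scipy` first; otherwise constructing the scheduler below is
# expected to fail with a message naming the missing scipy dependency.
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

scheduler = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", scheduler=scheduler
)
```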
* add accelerate to load models with smaller memory footprint
* remove low_cpu_mem_usage as it is redundant
* move accelerate init weights context to modeling utils
* add test to ensure results are the same when loading with accelerate
* add tests to ensure ram usage gets lower when using accelerate
* move accelerate logic to single snippet under modeling utils and remove it from configuration utils
* format code to pass quality check
* fix imports with isort
* add accelerate to test extra deps
* only import accelerate if device_map is set to auto
* move accelerate availability check to diffusers import utils
* format code
* add device map to pipeline abstraction
* lint it to pass PR quality check
* fix class check to use accelerate when using diffusers ModelMixin subclasses
* use low_cpu_mem_usage in transformers if device_map is not available
* NoModuleLayer
* comment out tests
* up
* up
* finish
* Update src/diffusers/pipelines/stable_diffusion/safety_checker.py
* finish
* up
* make style
Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
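A hedged example of the accelerate-backed path above: `device_map="auto"` lets accelerate place the weights, which keeps peak RAM usage low while the pipeline is instantiated.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate to be installed
)
image = pipe("an oil painting of a windmill").images[0]
```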
* handle dtype in vae and image2image pipeline
* fix inpaint in fp16
* dtype should be handled in add_noise
* style
* address review comments
* add simple fast tests to check fp16
* fix test name
* put mask in fp16
This is to ensure that the final latent slices stay somewhat consistent as more changes are introduced into the library.
Signed-off-by: James R T <jamestiotio@gmail.com>
* Swap fp16 error to warning
Also remove the associated test
* Formatting
* warn -> warning
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Raise an error when moving an fp16 pipeline to CPU
* Raise an error when moving an fp16 pipeline to CPU
* style
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Improve the message
* cuda
* Update tests/test_pipelines.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
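Finally, a hedged illustration of the guard added above: a float16 pipeline is intended for GPU execution, so moving it to CPU now fails loudly instead of silently producing broken or very slow results.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

pipe.to("cuda")   # fine: fp16 weights on GPU
# pipe.to("cpu")  # raises, per the change above: float16 is not well supported on CPU
```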