* Fix equality test for ddim and ddpm
* add docs for use_clipped_model_output in DDIM
* fix inline comment
* reorder imports in test_pipelines.py
* Ignore use_clipped_model_output if scheduler doesn't take it
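Ignoring `use_clipped_model_output` when a scheduler's `step` doesn't declare it can be sketched with `inspect.signature`. The scheduler classes below are stand-ins for illustration, not the actual diffusers signatures:

```python
import inspect

# Stand-in schedulers: one accepts use_clipped_model_output, one does not.
class DDIMLikeScheduler:
    def step(self, model_output, t, sample, use_clipped_model_output=False):
        return {"clipped": use_clipped_model_output}

class PNDMLikeScheduler:
    def step(self, model_output, t, sample):
        return {"clipped": None}

def call_step(scheduler, model_output, t, sample, **extra_kwargs):
    # Keep only the extra kwargs that this scheduler's step() declares,
    # so use_clipped_model_output is silently dropped for schedulers
    # that don't take it.
    accepted = set(inspect.signature(scheduler.step).parameters)
    filtered = {k: v for k, v in extra_kwargs.items() if k in accepted}
    return scheduler.step(model_output, t, sample, **filtered)
```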
* add method to enable cuda with minimal gpu usage to stable diffusion
* add test to minimal cuda memory usage
* ensure all models but unet are on torch.float32
* move to cpu_offload along with minor internal changes to make it work
* make it test against accelerate master branch
* coming back, it's official: I don't know how to make it test against the master branch from accelerate
* make it install accelerate from master on tests
* go back to accelerate>=0.11
* undo prettier formatting on yml files
* undo prettier formatting on yml files again
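The cpu_offload idea in the commits above can be sketched without torch: a submodule's weights live on the CPU between calls and are moved to the accelerator only for the duration of its forward pass. Everything here (class name, attributes, the fake compute) is illustrative, not the diffusers/accelerate API:

```python
# Hypothetical sketch of CPU offloading: keep weights off the accelerator
# and move them on only while this module is actually running.
class OffloadedModule:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights     # lives on "cpu" between calls
        self.device = "cpu"

    def forward(self, x, execution_device="cuda"):
        self.device = execution_device      # move weights in
        try:
            result = x + sum(self.weights)  # stand-in for real compute
        finally:
            self.device = "cpu"             # move weights back out
        return result
```

This trades extra host/device transfer time for a much smaller peak GPU footprint, since only one submodule's weights occupy the accelerator at a time.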
* begin pipe
* add new pipeline
* add tests
* correct fast test
* up
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
* Update tests/test_pipelines.py
* up
* up
* make style
* add fp16 test
* doc, comments
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* [CI] Add Apple M1 tests
* setup-python
* python build
* conda install
* remove branch
* only 3.8 is built for osx-arm
* try fetching prebuilt tokenizers
* use user cache
* update shells
* Reports and cleanup
* -> MPS
* Disable parallel tests
* Better naming
* investigate worker crash
* return xdist
* restart
* num_workers=2
* still crashing?
* faulthandler for segfaults
* faulthandler for segfaults
* remove restarts, stop on segfault
* torch version
* change installation order
* Use pre-RC version of PyTorch.
To be updated when it is released.
* Skip crashing test on MPS, add new one that works.
* Skip cuda tests in mps device.
* Actually use generator in test.
I think this was a typo.
* make style
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Bump to 0.6.0.dev0
* Deprecate tensor_format and .samples
* style
* upd
* upd
* style
* sample -> images
* Update src/diffusers/schedulers/scheduling_ddpm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_ddim.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_karras_ve.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_lms_discrete.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_ve.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/schedulers/scheduling_sde_vp.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Give more customizable options for safety checker
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* Finish
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* [Dummy imports] Better error message
* Test: load pipeline with LMS scheduler.
Fails with a cryptic message if scipy is not installed.
* Correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add accelerate to load models with smaller memory footprint
* remove low_cpu_mem_usage as it is redundant
* move accelerate init weights context to modeling utils
* add test to ensure results are the same when loading with accelerate
* add tests to ensure RAM usage gets lower when using accelerate
* move accelerate logic to a single snippet under modeling utils and remove it from configuration utils
* format code to pass quality check
* fix imports with isort
* add accelerate to test extra deps
* only import accelerate if device_map is set to auto
* move accelerate availability check to diffusers import utils
* format code
* add device map to pipeline abstraction
* lint it to pass PR quality check
* fix class check to use accelerate when using diffusers ModelMixin subclasses
* use low_cpu_mem_usage in transformers if device_map is not available
* NoModuleLayer
* comment out tests
* up
* uP
* finish
* Update src/diffusers/pipelines/stable_diffusion/safety_checker.py
* finish
* uP
* make style
Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
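Moving the accelerate availability check into an import-utils module, as the commits above describe, typically amounts to a `find_spec` probe gated on `device_map`. A minimal sketch with assumed helper names:

```python
import importlib.util

def is_accelerate_available() -> bool:
    # True only if the accelerate package can be imported in this env;
    # callers gate device_map handling on this instead of a bare import.
    return importlib.util.find_spec("accelerate") is not None

def get_init_context(device_map):
    # Only reach for accelerate when device_map is actually requested.
    if device_map == "auto" and not is_accelerate_available():
        raise ImportError("device_map='auto' requires the accelerate package")
    return "accelerate" if device_map == "auto" else "default"
```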
* handle dtype in vae and image2image pipeline
* fix inpaint in fp16
* dtype should be handled in add_noise
* style
* address review comments
* add simple fast tests to check fp16
* fix test name
* put mask in fp16
This is to ensure that the final latent slices stay somewhat consistent as more changes are introduced into the library.
Signed-off-by: James R T <jamestiotio@gmail.com>
* Swap fp16 error to warning
Also remove the associated test
* Formatting
* warn -> warning
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Raise an error when moving an fp16 pipeline to CPU
* Raise an error when moving an fp16 pipeline to CPU
* style
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipeline_utils.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Improve the message
* cuda
* Update tests/test_pipelines.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
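The fp16-to-CPU guard can be sketched as a check inside the pipeline's `to()` override. Dtype and device are plain strings here and the error message is paraphrased, not the exact diffusers wording:

```python
class PipelineSketch:
    def __init__(self, dtype="float16", device="cuda"):
        self.dtype = dtype
        self.device = device

    def to(self, device):
        # Half precision is unsupported for many ops on CPU, so moving an
        # fp16 pipeline there would fail later in a confusing way; raise
        # eagerly with an actionable message instead.
        if self.dtype == "float16" and device == "cpu":
            raise ValueError(
                "Pipelines loaded with torch_dtype=torch.float16 cannot be "
                "moved to the CPU; reload in float32 instead."
            )
        self.device = device
        return self
```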
* Add callback parameters for Stable Diffusion pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
* Lint code with `black --preview`
Signed-off-by: James R T <jamestiotio@gmail.com>
* Refactor callback implementation for Stable Diffusion pipelines
* Fix missing imports
Signed-off-by: James R T <jamestiotio@gmail.com>
* Fix documentation format
Signed-off-by: James R T <jamestiotio@gmail.com>
* Add kwargs parameter to standardize with other pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
* Modify Stable Diffusion pipeline callback parameters
Signed-off-by: James R T <jamestiotio@gmail.com>
* Remove useless imports
Signed-off-by: James R T <jamestiotio@gmail.com>
* Change types for timestep and onnx latents
* Fix docstring style
* Return decode_latents and run_safety_checker back into __call__
* Remove unused imports
* Add intermediate state tests for Stable Diffusion pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
* Fix intermediate state tests for Stable Diffusion pipelines
Signed-off-by: James R T <jamestiotio@gmail.com>
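The callback mechanism added in the commits above boils down to invoking a user function every `callback_steps` iterations of the denoising loop. A pure-Python sketch with stand-in names (the real loop runs the UNet and a scheduler step where the placeholder decrement is):

```python
def denoising_loop(timesteps, latents, callback=None, callback_steps=1):
    for i, t in enumerate(timesteps):
        latents = latents - 1  # placeholder for the real scheduler update
        if callback is not None and i % callback_steps == 0:
            # Expose (step index, timestep, current latents) so callers can
            # inspect or visualize intermediate state.
            callback(i, t, latents)
    return latents

seen = []
result = denoising_loop(
    timesteps=[40, 30, 20, 10],
    latents=4,
    callback=lambda i, t, lat: seen.append((i, t, lat)),
    callback_steps=2,
)
```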
* Allow resolutions that are not multiples of 64
* ran black
* fix bug
* add test
* more explanation
* more comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
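Allowing resolutions that are not multiples of 64 hinges on the VAE's 8x spatial downsampling: the latent grid is `height // 8` by `width // 8`, so inputs only need to be compatible with a factor of 8. A sketch of the arithmetic, with a hypothetical helper name, where non-conforming sizes are rounded down:

```python
def latent_shape(height, width, vae_scale_factor=8):
    # The VAE downsamples by vae_scale_factor in each spatial dim, so the
    # latent grid is the integer division of the pixel resolution.
    if height % vae_scale_factor or width % vae_scale_factor:
        # Round down to the nearest multiple so decoding yields a valid size.
        height -= height % vae_scale_factor
        width -= width % vae_scale_factor
    return height // vae_scale_factor, width // vae_scale_factor
```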
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be torch tensor
* removed set_format in readme
* remove set format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented out numpy tests
* timesteps needs to be discrete
* cast timesteps to int in flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update expected results of slow tests
* relax sum and mean tests
* Print shapes when reporting exception
* formatting
* fix sentence
* relax test_stable_diffusion_fast_ddim for gpu fp16
* relax flaky tests on GPU
* added comment on large tolerances
* black
* format
* set scheduler seed
* added generator
* use np.isclose
* set num_inference_steps to 50
* fix deprecation warning
* update expected_slice
* preprocess if image
* updated expected results
* updated expected from CI
* pass generator to VAE
* undo change back to orig
* use original
* revert back the expected on cpu
* revert back values for CPU
* more undo
* update result after using gen
* update mean
* set generator for mps
* update expected on CI server
* undo
* use new seed every time
* cpu manual seed
* reduce num_inference_steps
* style
* use generator for randn
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
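Passing an explicit generator to the random draws, as in the last few commits, is what makes these tests deterministic without touching global seeding. The idea in stdlib terms, with `torch.Generator` replaced by `random.Random`:

```python
import random

def sample_noise(n, generator=None):
    # Draw from the supplied generator when given, so results are
    # reproducible regardless of any global seeding done elsewhere.
    rng = generator if generator is not None else random
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = sample_noise(3, generator=random.Random(0))
b = sample_noise(3, generator=random.Random(0))
```

Two draws with identically seeded generators match, even if other code reseeds the module-level RNG in between.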