Commit Graph

355 Commits

Author SHA1 Message Date
Suraj Patil 0eb507f2af
StableDiffusionImageVariationPipeline (#1365)
* add StableDiffusionImageVariationPipeline

* add ini init

* use CLIPVisionModelWithProjection

* fix _encode_image

* add copied from

* fix copies

* add doc

* handle tensor in _encode_image

* add tests

* correct model_id

* remove copied from in enable_sequential_cpu_offload

* fix tests

* make slow tests pass

* update slow tests

* use temp model for now

* fix test_stable_diffusion_img_variation_intermediate_state

* fix test_stable_diffusion_img_variation_intermediate_state

* check for torch.Tensor

* quality

* fix name

* fix slow tests

* install transformers from source

* fix install

* fix install

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* input_image -> image

* remove deprecation warnings

* fix test_stable_diffusion_img_variation_multiple_images

* make flake happy

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-23 14:36:39 +01:00
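A minimal usage sketch for the image-variation pipeline added in #1365. The checkpoint id and the example image path are assumptions for illustration; the pipeline conditions on a CLIP image embedding instead of a text prompt.

```python
from PIL import Image
from diffusers import StableDiffusionImageVariationPipeline

# Assumed checkpoint; substitute the image-variation weights you actually use.
pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers"
).to("cuda")

# Any RGB input image works; "input.png" is a placeholder path.
image = Image.open("input.png").convert("RGB")

# The pipeline takes `image` (renamed from `input_image` in this PR) rather than a prompt.
variations = pipe(image=image, guidance_scale=3.0, num_inference_steps=50).images
variations[0].save("variation.png")
```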
Suraj Patil 9e234d8048
handle fp16 in `UNet2DModel` (#1216)
* make sure fp16 runs well

* add fp16 test for superes

* Update src/diffusers/models/unet_2d.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* gen on cuda

* always run fast inference test on cpu

* run on cpu

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-23 11:13:34 +01:00
Penn 8fd3a74322
Fix using non-square images with UNet2DModel and DDIM/DDPM pipelines (#1289)
* fix non square images with UNet2DModel and DDIM/DDPM pipelines

* fix unet_2d `sample_size` docstring

* update pipeline tests for unet uncond

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-23 11:11:39 +01:00
Manuel Brack e50c25d808
Add Safe Stable Diffusion Pipeline (#1244)
* Add pipeline_stable_diffusion_safe.py to pipelines

* Fix repository consistency

Ran make fix-copies after adding new pipeline

* Add Paper/Equation reference for parameters to doc string

* Ensure code style and quality

* Perform code refactoring

* Fix copies inherited from merge with huggingface/main

* Add docs

* Fix code style

* Fix errors in documentation

* Fix refactoring error

* remove debugging print statement

* added Safe Latent Diffusion tests

* Fix style

* Fix style

* Add pre-defined safety configurations

* Fix line-break

* fix some tests

* finish

* Change safety checker

* Add missing safety_checker.py file

* Remove unused imports

Co-authored-by: PatrickSchrML <patrick_schramowski@hotmail.de>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-22 11:51:30 +01:00
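A hedged sketch of the Safe Latent Diffusion pipeline from #1244. The checkpoint id and the `SafetyConfig` import path/preset names are assumptions based on the "pre-defined safety configurations" commit, not confirmed by this log.

```python
from diffusers import StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig  # assumed import path

# Assumed checkpoint for the safe pipeline.
pipe = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe").to("cuda")

# The presets bundle the paper's safety-guidance hyperparameters (scale, warmup, threshold, momentum).
image = pipe(prompt="a photograph of an astronaut riding a horse", **SafetyConfig.MEDIUM).images[0]
```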
Patrick von Platen ab1f01e634 make style 2022-11-20 19:37:28 +01:00
Victor Schmidt 3bec90ff2c
Handle batches and Tensors in `pipeline_stable_diffusion_inpaint.py:prepare_mask_and_masked_image` (#1003)
* Handle batches and Tensors in `prepare_mask_and_masked_image`

* `blackfy`
upgrade `black`

* handle mask as `np.array`

* add docstring

* revert `black` changes with smaller line length

* missing ValueError in docstring

* raise `TypeError` for image as tensor but not mask

* typo in mask shape selection

* check for batch dim

* fix: wrong indentation

* add tests

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-20 19:33:09 +01:00
Clayton Sims 30220905c4
Legacy Inpainting Pipeline for Onnx Models (#1237)
* Add legacy inpainting pipeline compatibility for onnx

* remove commented out line

* Add onnx legacy inpainting test

* Fix slow decorators

* pep8 styling

* isort styling

* dummy object

* ordering consistency

* style

* docstring styles

* Refactor common prompt encoding pattern

* Update tests to permanent repository home

* support all available schedulers until ONNX IO binding is available

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* updated styling from PR suggested feedback

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-11-18 16:33:12 +01:00
Patrick von Platen fcfdd95f0b
Fix/Enable all schedulers for in-painting (#1331)
* inpaint fix k lms

* onnox as well

* up
2022-11-18 12:32:17 +01:00
Patrick von Platen 632dacea2f
[Custom pipeline] Easier loading of local pipelines (#1327)
* [Custom pipeline] Easier loading of local pipelines

* upgrade black
2022-11-17 16:00:26 +01:00
Will Berman f1fcfdeec5
vq diffusion classifier free sampling (#1294)
* vq diffusion classifier free sampling

* correct

* uP

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-16 17:51:43 +01:00
Patrick von Platen 65d136e067
Add improved handling of pil (#1309)
* Better error message for transformers dummy

* [PIL] Better deprecation functionality

* up
2022-11-16 15:58:22 +01:00
Suraj Patil 46893adacd
[AltDiffusion] add tests (#1311)
* begin tests

* fix model ids

* don't use safety checker in tests

* add im2img2 tests

* fix integration tests

* integration tests

* style

* add sentencepiece in test dep

* quality

* 4 decimal points

* fix im2img test

* increase the tok slightly
2022-11-16 15:40:26 +01:00
Patrick von Platen a0520193e1
Add Scheduler.from_pretrained and better scheduler changing (#1286)
* add conversion script for vae

* uP

* uP

* more changes

* push

* up

* finish again

* up

* up

* up

* up

* finish

* up

* uP

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* up

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-15 18:15:13 +01:00
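A minimal sketch of the scheduler-swapping workflow that #1286 enables, assuming a Stable Diffusion checkpoint laid out with a `scheduler` subfolder (the model id below is an assumption).

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint

# Load only the scheduler configuration from the checkpoint...
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")

# ...and pass it in when building the pipeline, replacing the checkpoint's default scheduler.
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler)
```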
Nathan Lambert 7c5fef81e0
Add UNet 1d for RL model for planning + colab (#105)
* re-add RL model code

* match model forward api

* add register_to_config, pass training tests

* fix tests, update forward outputs

* remove unused code, some comments

* add to docs

* remove extra embedding code

* unify time embedding

* remove conv1d output sequential

* remove sequential from conv1dblock

* style and deleting duplicated code

* clean files

* remove unused variables

* clean variables

* add 1d resnet block structure for downsample

* rename as unet1d

* fix renaming

* rename files

* add get_block(...) api

* unify args for model1d like model2d

* minor cleaning

* fix docs

* improve 1d resnet blocks

* fix tests, remove permuts

* fix style

* add output activation

* rename flax blocks file

* Add Value Function and corresponding example script to Diffuser implementation (#884)

* valuefunction code

* start example scripts

* missing imports

* bug fixes and placeholder example script

* add value function scheduler

* load value function from hub and get best actions in example

* very close to working example

* larger batch size for planning

* more tests

* merge unet1d changes

* wandb for debugging, use newer models

* success!

* turns out we just need more diffusion steps

* run on modal

* merge and code cleanup

* use same api for rl model

* fix variance type

* wrong normalization function

* add tests

* style

* style and quality

* edits based on comments

* style and quality

* remove unused var

* hack unet1d into a value function

* add pipeline

* fix arg order

* add pipeline to core library

* community pipeline

* fix couple shape bugs

* style

* Apply suggestions from code review

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* update post merge of scripts

* add midblock / outblock architecture

* Pipeline cleanup (#947)

* valuefunction code

* start example scripts

* missing imports

* bug fixes and placeholder example script

* add value function scheduler

* load value function from hub and get best actions in example

* very close to working example

* larger batch size for planning

* more tests

* merge unet1d changes

* wandb for debugging, use newer models

* success!

* turns out we just need more diffusion steps

* run on modal

* merge and code cleanup

* use same api for rl model

* fix variance type

* wrong normalization function

* add tests

* style

* style and quality

* edits based on comments

* style and quality

* remove unused var

* hack unet1d into a value function

* add pipeline

* fix arg order

* add pipeline to core library

* community pipeline

* fix couple shape bugs

* style

* Apply suggestions from code review

* clean up comments

* convert older script to using pipeline and add readme

* rename scripts

* style, update tests

* delete unet rl model file

* remove imports in src

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* Update src/diffusers/models/unet_1d_blocks.py

* Update tests/test_models_unet.py

* RL Cleanup v2 (#965)

* valuefunction code

* start example scripts

* missing imports

* bug fixes and placeholder example script

* add value function scheduler

* load value function from hub and get best actions in example

* very close to working example

* larger batch size for planning

* more tests

* merge unet1d changes

* wandb for debugging, use newer models

* success!

* turns out we just need more diffusion steps

* run on modal

* merge and code cleanup

* use same api for rl model

* fix variance type

* wrong normalization function

* add tests

* style

* style and quality

* edits based on comments

* style and quality

* remove unused var

* hack unet1d into a value function

* add pipeline

* fix arg order

* add pipeline to core library

* community pipeline

* fix couple shape bugs

* style

* Apply suggestions from code review

* clean up comments

* convert older script to using pipeline and add readme

* rename scripts

* style, update tests

* delete unet rl model file

* remove imports in src

* add specific vf block and update tests

* style

* Update tests/test_models_unet.py

Co-authored-by: Nathan Lambert <nathan@huggingface.co>

* fix quality in tests

* fix quality style, split test file

* fix checks / tests

* make timesteps closer to main

* unify block API

* unify forward api

* delete lines in examples

* style

* examples style

* all tests pass

* make style

* make dance_diff test pass

* Refactoring RL PR (#1200)

* init file changes

* add import utils

* finish cleaning files, imports

* remove import flags

* clean examples

* fix imports, tests for merge

* update readmes

* hotfix for tests

* quality

* fix some tests

* change defaults

* more mps test fixes

* unet1d defaults

* do not default import experimental

* defaults for tests

* fix tests

* fix-copies

* fix

* changes per Patrick's comments (#1285)

* changes per Patrick's comments

* update conversion script

* fix renaming

* skip more mps tests

* last test fix

* Update examples/rl/README.md

Co-authored-by: Ben Glickenhaus <benglickenhaus@gmail.com>
2022-11-14 13:48:48 -08:00
Suraj Patil a8d0977769
[StableDiffusionInpaintPipeline] fix batch_size for mask and masked latents (#1279)
fix bs for mask and masked latents
2022-11-14 22:03:10 +01:00
Patrick von Platen b3c5e086e5
Finalize stable diffusion refactor (#1269)
* finish

* cleaner

* more fixes

* refactor

* make fix copies

* refactor cycle diffusion

* finish

* finish2

* Apply suggestions from code review
2022-11-13 23:54:30 +01:00
Patrick von Platen 4c660d16d0
[Stable Diffusion] Fix padding / truncation (#1226)
* [Stable Diffusion] Fix padding / truncation

* finish
2022-11-13 20:19:55 +01:00
Anton Lozhkov 2e980ac9a0
[Tests] Adjust TPU test values (#1233)
* [Tests] Adjust TPU test values

* slow tests

* remaining refs
2022-11-10 00:44:42 +01:00
Anton Lozhkov 0feb21a18c
[Tests] Fix mps+generator fast tests (#1230)
* [Tests] Fix mps+generator fast tests

* mps for Euler

* retry

* warmup issue again?

* fix reproducible initial noise

* Revert "fix reproducible initial noise"

This reverts commit f300d05cb9782ed320064a0c58577a32d4139e6d.

* fix reproducible initial noise

* fix device
2022-11-10 00:09:22 +01:00
Patrick von Platen 187de44352 Fix device on save/load tests 2022-11-09 22:18:14 +00:00
Anton Lozhkov 7d0c272939
Match the generator device to the pipeline for DDPM and DDIM (#1222)
* Match the generator device to the pipeline for DDPM and DDIM

* style

* fix

* update values

* fix fast tests

* trigger slow tests

* deprecate

* last value fixes

* mps fixes
2022-11-09 23:00:23 +01:00
Pedro Cuenca af279434d0
Flax tests: don't hardcode number of devices (#1175)
Flax tests: don't hardcode number of devices.

This makes it possible to test on CPU/GPU. However, expected slices are
only checked when there are 8 devices.
2022-11-09 20:04:43 +01:00
Duong A. Nguyen 5a59f9b717
Add LDM Super Resolution pipeline (#1116)
* Add ldm super resolution pipeline

* style

* fix copies

* style

* fix doc

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* add doc

* address comments

* address comments

* fix doc

* minor

* add tests

* add tests

* load text encoder from subfolder

* fix test

* fix test

* style

* style

* handle mps latents

* unfix typo

* unfix typo

* Update tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix set_timesteps mps

* fix set_timesteps mps

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* style

* test 64x64 instead of 256x256

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-09 13:42:16 +01:00
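A hedged usage sketch for the LDM super-resolution pipeline added in #1116. The checkpoint id, input size, and file paths are assumptions for illustration.

```python
from PIL import Image
from diffusers import LDMSuperResolutionPipeline

# Assumed 4x super-resolution checkpoint.
pipe = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages").to("cuda")

low_res = Image.open("low_res.png").convert("RGB").resize((128, 128))  # placeholder input

# eta > 0 makes the DDIM-style sampling stochastic.
upscaled = pipe(image=low_res, num_inference_steps=100, eta=1.0).images[0]
upscaled.save("upscaled.png")
```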
Patrick von Platen b93fe08545
[Loading] Make sure loading edge cases work (#1192)
* [Loading] Make edge cases work

* up

* finish

* up
2022-11-09 12:28:56 +01:00
Patrick von Platen 6cf72a9b1e
Fix slow tests (#1210)
* fix tests

* Fix more

* more
2022-11-09 11:22:12 +01:00
Anton Lozhkov 24895a1f49
Fix cpu offloading (#1177)
* Fix cpu offloading

* get offloaded devices locally for SD pipelines
2022-11-09 10:28:10 +01:00
Patrick von Platen 249d9bc0e7
[Scheduler] Move predict epsilon to init (#1155)
* [Scheduler] Move predict epsilon to init

* up

* uP

* uP

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-08 18:08:08 +01:00
Anton Lozhkov 11f7d6f3cc
[ONNX] Improve ONNXPipeline scheduler compatibility, fix safety_checker (#1173)
* [ONNX] Improve ONNX scheduler compatibility, fix safety_checker

* typo
2022-11-08 14:39:11 +01:00
Pedro Cuenca 813744e5f3
MPS schedulers: don't use float64 (#1169)
* Schedulers: don't use float64 on mps

* Test set_timesteps() on device (float schedulers).

* SD pipeline: use device in set_timesteps.

* SD in-painting pipeline: use device in set_timesteps.

* Tests: fix mps crashes.

* Skip test_load_pipeline_from_git on mps.

Not compatible with float16.

* Use device.type instead of str in Euler schedulers.
2022-11-08 13:11:33 +01:00
Cheng Lu b4a1ed8544
Add multistep DPM-Solver discrete scheduler (#1132)
* add dpmsolver discrete pytorch scheduler

* fix some typos in dpm-solver pytorch

* add dpm-solver pytorch in stable-diffusion pipeline

* add jax/flax version dpm-solver

* change code style

* change code style

* add docs

* add `add_noise` method for dpmsolver

* add pytorch unit test for dpmsolver

* add dummy object for pytorch dpmsolver

* Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update tests/test_config.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* resolve the code comments

* rename the file

* change class name

* fix code style

* add auto docs for dpmsolver multistep

* add more explanations for the stabilizing trick (for steps < 15)

* delete the dummy file

* change the API name of predict_epsilon, algorithm_type and solver_type

* add compatible lists

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-06 22:49:55 +01:00
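A hedged sketch of plugging the new multistep DPM-Solver into Stable Diffusion; the public class name `DPMSolverMultistepScheduler` follows the "change class name" commit above, and the checkpoint id is an assumption.

```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
pipe = StableDiffusionPipeline.from_pretrained(model_id).to("cuda")

# Rebuild the scheduler from the pipeline's existing scheduler config.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# DPM-Solver targets very few sampling steps (roughly 20 or fewer).
image = pipe("a painting of a fox in the snow", num_inference_steps=20).images[0]
```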
Chen Wu (吴尘) 9d8943b7e7
Add CycleDiffusion pipeline using Stable Diffusion (#888)
* Add CycleDiffusion pipeline for Stable Diffusion

* Add the option of passing noise to DDIMScheduler

Add the option of providing the noise itself to DDIMScheduler, instead of the random seed generator.

* Update README.md

* Update README.md

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update import format

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update scheduling_ddim.py

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* Update scheduling_ddim.py

* add two tests

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Update README.md

* Rename pipeline name as suggested in the latest reviewer comment

* Update test_pipelines.py

* Update test_pipelines.py

* Update test_pipelines.py

* Update pipeline_stable_diffusion_cycle_diffusion.py

* Remove the generator

This generator does not control all randomness during sampling, which can be misleading.

* Update optimal hyperparameters

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Update src/diffusers/pipelines/stable_diffusion/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

* uP

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_cycle_diffusion.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* up

* up

* Replace assert with ValueError

* finish docs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-04 20:51:06 +01:00
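A rough sketch of the CycleDiffusion pipeline added in #888, which edits a source image toward a target prompt with a DDIM scheduler. The checkpoint id, file path, and the `source_prompt` / `source_guidance_scale` argument names are stated here as assumptions.

```python
from PIL import Image
from diffusers import CycleDiffusionPipeline, DDIMScheduler

model_id = "CompVis/stable-diffusion-v1-4"  # assumed checkpoint
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler).to("cuda")

init_image = Image.open("horse.png").convert("RGB").resize((512, 512))  # placeholder input
image = pipe(
    prompt="A black colored horse",          # target description
    source_prompt="A brown colored horse",   # description of the input image
    image=init_image,
    strength=0.85,
    guidance_scale=2.0,
    source_guidance_scale=1.0,
).images[0]
```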
Pi Esposito 1172c9634b
add enable sequential cpu offloading to other stable diffusion pipelines (#1085)
* add enable sequential cpu offloading to other stable diffusion pipelines

* trigger ci

* fix styling

* interpolate before converting to device to avoid breaking when cpu_offload is enabled with fp16

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>

* style again I need to stop forgetting this thing

* fix inpainting bug that could cause device misalignment

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Gengo  <pedro.gabriel.lourenco@hotmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 19:25:28 +01:00
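A small sketch of the sequential CPU offloading enabled for more pipelines by #1085: sub-modules are kept on CPU and moved to the GPU only while they run, trading speed for memory. The checkpoint id is an assumption.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed checkpoint
)

# Do not call pipe.to("cuda") first; the offload hooks manage device placement themselves.
pipe.enable_sequential_cpu_offload()
```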
Lewington-pitsos 2c108693cc
Test precision increases (#1113)
* increase the precision of slice-based tests and make the default test case easier to single out

* increase precision of unit tests which already rely on float comparisons

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-04 17:54:01 +01:00
Patrick von Platen 1d0f3c211e
Move accelerate to a soft-dependency (#1134)
* finish

* finish

* Update src/diffusers/modeling_utils.py

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* more fixes

* fix

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-04 14:58:52 +01:00
Patrick von Platen 42bb459457
[Low cpu memory] Correct naming and improve default usage (#1122)
* correct naming

* finish

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-11-03 18:11:18 +01:00
Suraj Patil 7482178162
default fast model loading 🔥 (#1115)
* make accelerate hard dep

* default fast init

* move params to cpu when device map is None

* handle device_map=None

* handle torch < 1.9

* remove device_map="auto"

* style

* add accelerate in torch extra

* remove accelerate from extras["test"]

* raise an error if torch is available but not accelerate

* update installation docs

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* improve default loading speed even further, allow disabling fast loading

* address review comments

* adapt the tests

* fix test_stable_diffusion_fast_load

* fix test_read_init

* temp fix for dummy checks

* Trigger Build

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-03 17:25:57 +01:00
Will Berman ef2ea33c3b
VQ-diffusion (#658)
* Changes for VQ-diffusion VQVAE

Add specify dimension of embeddings to VQModel:
`VQModel` will by default set the dimension of embeddings to the number
of latent channels. The VQ-diffusion VQVAE has a smaller
embedding dimension (128) than its number of latent channels (256).

Add AttnDownEncoderBlock2D and AttnUpDecoderBlock2D to the up and down
unet block helpers. VQ-diffusion's VQVAE uses those two block types.

* Changes for VQ-diffusion transformer

Modify attention.py so SpatialTransformer can be used for
VQ-diffusion's transformer.

SpatialTransformer:
- Can now operate over discrete inputs (classes of vector embeddings) as well as continuous.
- `in_channels` was made optional in the constructor so two locations where it was passed as a positional arg were moved to kwargs
- modified forward pass to take optional timestep embeddings

ImagePositionalEmbeddings:
- added to provide positional embeddings to discrete inputs for latent pixels

BasicTransformerBlock:
- norm layers were made configurable so that the VQ-diffusion could use AdaLayerNorm with timestep embeddings
- modified forward pass to take optional timestep embeddings

CrossAttention:
- now may optionally take a bias parameter for its query, key, and value linear layers

FeedForward:
- Internal layers are now configurable

ApproximateGELU:
- Activation function in VQ-diffusion's feedforward layer

AdaLayerNorm:
- Norm layer modified to incorporate timestep embeddings

* Add VQ-diffusion scheduler

* Add VQ-diffusion pipeline

* Add VQ-diffusion convert script to diffusers

* Add VQ-diffusion dummy objects

* Add VQ-diffusion markdown docs

* Add VQ-diffusion tests

* some renaming

* some fixes

* more renaming

* correct

* fix typo

* correct weights

* finalize

* fix tests

* Apply suggestions from code review

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* finish

* finish

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-11-03 16:10:28 +01:00
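A hedged end-to-end sketch of the VQ-diffusion pipeline introduced in #658; the checkpoint id is an assumption based on the converted weights mentioned in the PR.

```python
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq").to("cuda")  # assumed checkpoint

# The transformer denoises discrete VQVAE latent indices rather than continuous latents.
image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("teddy_bear.png")
```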
Revist d38c804320
feat: add repaint (#974)
* feat: add repaint

* fix: fix quality check with `make fix-copies`

* fix: remove old unnecessary arg

* chore: change default to DDPM (looks better in experiments)

* ".to(device)" changed to "device="

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* make generator device-specific

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* make generator device-specific and change shape

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* fix: add preprocessing for image and mask

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* fix: update test

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/pipelines/repaint/pipeline_repaint.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add docs and examples

* Fix toctree

Co-authored-by: fja <fja@zurich.ibm.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-11-03 15:42:46 +01:00
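A hedged sketch of the RePaint inpainting pipeline added in #974. The unconditional DDPM checkpoint, the file paths, and the `jump_length` / `jump_n_sample` resampling arguments follow the RePaint paper and pipeline docs, and are assumptions here.

```python
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

model_id = "google/ddpm-ema-celebahq-256"  # assumed unconditional checkpoint
scheduler = RePaintScheduler.from_pretrained(model_id)
pipe = RePaintPipeline.from_pretrained(model_id, scheduler=scheduler).to("cuda")

original = Image.open("face.png").convert("RGB").resize((256, 256))
mask = Image.open("mask.png").convert("RGB").resize((256, 256))  # binary keep/inpaint mask (convention per the pipeline docs)

output = pipe(
    image=original,
    mask_image=mask,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,    # how far the resampling jumps back in time
    jump_n_sample=10,  # how many times each jump is resampled
)
output.images[0].save("inpainted.png")
```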
Anton Lozhkov 4a38166afe
Allow saving `None` pipeline components (#1118)
* Allow saving `None` pipeline components

* support flax as well

* style
2022-11-03 15:41:33 +01:00
Anton Lozhkov 0edf9ca082
Fix hub-dependent tests for PRs (#1119)
* Remove the hub token

* replace repos

* style
2022-11-03 15:24:32 +01:00
Patrick von Platen c39a511b5f
[Loading] Ignore unneeded files (#1107)
* [Loading] Ignore unneeded files

* up
2022-11-02 19:20:42 +01:00
Grigory Sizov 5cd29d623a
Fix tests for equivalence of DDIM and DDPM pipelines (#1069)
* Fix equality test for ddim and ddpm

* add docs for use_clipped_model_output in DDIM

* fix inline comment

* reorder imports in test_pipelines.py

* Ignore use_clipped_model_output if scheduler doesn't take it
2022-11-02 14:50:32 +01:00
Anton Lozhkov 4e59bcc680
[CI] Framework and hardware-specific CI tests (#997)
* [WIP][CI] Framework and hardware-specific docker images for CI tests

* username

* fix cpu

* try out the image

* push latest

* update workspace

* no root isolation for actions

* add a flax image

* flax and onnx matrix

* fix runners

* add reports

* onnxruntime image

* retry tpu

* fix

* fix

* build onnxruntime

* naming

* onnxruntime-gpu image

* onnxruntime-gpu image, slow tests

* latest jax version

* trigger flax

* run flax tests in one thread

* fast flax tests on cpu

* fast flax tests on cpu

* trigger slow tests

* rebuild torch cuda

* force cuda provider

* fix onnxruntime tests

* trigger slow

* don't specify gpu for tpu

* optimize

* memory limit

* fix flax tests

* disable docker cache
2022-11-02 14:07:07 +01:00
Lewington-pitsos 8ee21915bf
Integration tests precision improvement for inpainting (#1052)
* improve test precision

get tests passing with greater precision using lewington images

* make old numpy load function a wrapper around a more flexible numpy loading function

* adhere to black formatting

* add more black formatting

* adhere to isort

* loosen precision and replace path

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-02 11:47:26 +01:00
Patrick von Platen 17c2c0600b
[Tests] Fix slow tests (#1087) 2022-10-31 18:59:58 +01:00
Patrick von Platen c18941b01a
[Better scheduler docs] Improve usage examples of schedulers (#890)
* [Better scheduler docs] Improve usage examples of schedulers

* finish

* fix warnings and add test

* finish

* more replacements

* adapt fast tests hf token

* correct more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Integrate compatibility with euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-31 17:26:30 +01:00
hlky a1ea8c01c3
k-diffusion-euler (#1019)
* k-diffusion-euler

* make style make quality

* make fix-copies

* fix tests for euler a

* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/schedulers/scheduling_euler_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update src/diffusers/schedulers/scheduling_euler_discrete.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* remove unused arg and method

* update doc

* quality

* make flake happy

* use logger instead of warn

* raise error instead of deprecation

* don't require scipy

* pass generator in step

* fix tests

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update tests/test_scheduler.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* remove unused generator

* pass generator as extra_step_kwargs

* update tests

* pass generator as kwarg

* pass generator as kwarg

* quality

* fix test for lms

* fix tests

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-31 16:20:38 +01:00
Patrick von Platen 707b8684b3 fix slow test 2022-10-31 09:13:37 +00:00
Patrick von Platen 81b6fbf19d higher precision for vae 2022-10-28 18:19:06 +00:00
Patrick von Platen a7ae808ee2 increase tolerance 2022-10-28 17:50:22 +00:00
Patrick von Platen ea01a4c7f9 fix 2022-10-28 16:55:43 +00:00
Patrick von Platen cbbb29398a hot fix 2022-10-28 16:55:21 +00:00
Patrick von Platen d37f08da72
[Tests] no random latents anymore (#1045) 2022-10-28 18:52:25 +02:00
Patrick von Platen c4ef1efe46
[Tests] Better prints (#1043) 2022-10-28 17:38:31 +02:00
Patrick von Platen 8d6487f3cb
Fix some failing tests (#1041)
* up

* up

* up

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* Apply suggestions from code review
2022-10-28 17:05:00 +02:00
Patrick von Platen d2d9764f35
[Tests] Speed up slow tests (#1040)
* [Tests] Speed up slow tests

* Up

* up
2022-10-28 14:46:39 +02:00
Patrick von Platen a80480f0f2
[Tests] Improve unet / vae tests (#1018)
* improve tests

* up

* finish

* upload

* add init

* up

* finish vae

* finish

* reduce loading time with device_map

* remove device_map from CPU

* uP
2022-10-28 13:43:26 +02:00
Patrick von Platen 3be9fa97d6
[Accelerate model loading] Fix meta device and super low memory usage (#1016)
* [Accelerate model loading] Fix meta device and super low memory usage

* better naming
2022-10-27 12:11:42 +02:00
Pedro Cuenca 1d04e1b4de
Continuation of #942: additional float64 failure (#996)
* Add failing test for #940.

* Do not use torch.float64 in mps.

* style

* Temporarily skip add_noise for IPNDMScheduler.

Until #990 is addressed.

* Fix additional float64 error in mps.

* Improve add_noise test

* Slight edit – I think it's clearer this way.
2022-10-27 10:21:40 +02:00
Pi Esposito b2e2d1411c
minimal stable diffusion GPU memory usage with accelerate hooks (#850)
* add method to enable cuda with minimal gpu usage to stable diffusion

* add test for minimal cuda memory usage

* ensure all models but unet are on torch.float32

* move to cpu_offload along with minor internal changes to make it work

* make it test against accelerate master branch

* coming back, it's official: I don't know how to make it test against the master branch from accelerate

* make it install accelerate from master on tests

* go back to accelerate>=0.11

* undo prettier formatting on yml files

* undo prettier formatting on yml files again
2022-10-26 15:52:57 +02:00
Pedro Cuenca 0343d8f531
Do not use torch.float64 on the mps device (#942)
* Add failing test for #940.

* Do not use torch.float64 in mps.

* style

* Temporarily skip add_noise for IPNDMScheduler.

Until #990 is addressed.
2022-10-26 11:56:43 +02:00
Patrick von Platen 59f0ce82eb
[Dance Diffusion] Better naming (#981)
uP
2022-10-25 19:52:41 +02:00
Patrick von Platen 365ff8f76d
[Dance Diffusion] FP16 (#980)
* add in fp16

* up
2022-10-25 19:33:43 +02:00
Patrick von Platen 88fa6b7d68
[Dance Diffusion] Add dance diffusion (#803)
* start

* add more logic

* Update src/diffusers/models/unet_2d_condition_flax.py

* match weights

* up

* make model work

* making class more general, fixing missed file rename

* small fix

* make new conversion work

* up

* finalize conversion

* up

* first batch of variable renamings

* remove c and c_prev var names

* add mid and out block structure

* add pipeline

* up

* finish conversion

* finish

* upload

* more fixes

* Apply suggestions from code review

* add attr

* up

* uP

* up

* finish tests

* finish

* uP

* finish

* fix test

* up

* naming consistency in tests

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* remove hardcoded 16

* Remove bogus

* fix some stuff

* finish

* improve logging

* docs

* upload

Co-authored-by: Nathan Lambert <nol@berkeley.edu>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-25 18:39:25 +02:00
Kashif Rasul 240abddfbc
[Flax] added broadcast_to_shape_from_left helper and Scheduler tests (#864)
* added broadcast_to_shape_from_left helper

* initial tests

* fixed pndm tests

* shape required for pndm

* added require_flax

* fix style

* fix more imports

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-25 13:43:24 +02:00
Anton Lozhkov 2c82e0c4eb
Reorganize pipeline tests (#963)
* Reorganize pipeline tests

* fix vq
2022-10-24 16:34:01 +02:00
Kashif Rasul 9bca40296e
[MPS] fix mps failing tests (#934)
fix mps failing tests
2022-10-22 09:33:40 +02:00
Patrick von Platen 25dfd0f8dc
[Tests] Move stable diffusion into their own files (#936)
* [Tests] Move stable diffusion into their own files

* up
2022-10-21 12:49:52 +02:00
Anton Lozhkov 32bf4fdc43
Introduce the copy mechanism (#924)
* Introduce the copy mechanism

* init tests

* fix dummy tests

* with

* update copies tests
2022-10-20 20:26:03 +02:00
Suraj Patil 8be48507a0
fix test_components (#928) 2022-10-20 16:25:12 +02:00
Patrick von Platen db19a9d9d7
[DiffusionPipeline.from_pretrained] add warning when passing unused k… (#870)
[DiffusionPipeline.from_pretrained] add warning when passing unused kwargs
2022-10-20 13:30:01 +02:00
Patrick von Platen 83f8a5ff70
[Stable Diffusion] Add components function (#889)
* [Stable Diffusion] Add components function

* uP
2022-10-20 13:28:11 +02:00
Anton Lozhkov 89d124945a
ONNX supervised inpainting (#906)
* ONNX supervised inpainting

* sync with the torch pipeline

* fix concat

* update ref values

* back to 8 steps

* type fix

* make fix-copies
2022-10-19 17:03:31 +02:00
Patrick von Platen 46557121e6
finish tests (#909) 2022-10-19 16:36:51 +02:00
Suraj Patil b35d88c536
Stable diffusion inpainting. (#904)
* begin pipe

* add new pipeline

* add tests

* correct fast test

* up

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

* Update tests/test_pipelines.py

* up

* up

* make style

* add fp16 test

* doc, comments

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-10-19 16:11:50 +02:00
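A hedged usage sketch for the supervised inpainting pipeline added in #904; the checkpoint id and file paths are assumptions (the runwayml inpainting weights are the checkpoint commonly paired with this pipeline).

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16  # assumed checkpoint
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white marks the region to repaint

result = pipe(
    prompt="a vase of flowers on the table",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```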
Anton Lozhkov 8eb9d9703d
Improve ONNX img2img numpy handling, temporarily fix the tests (#899)
* [WIP] Onnx img2img determinism

* more numpy + seed

* numpy inpainting, tolerance

* revert test workflow
2022-10-19 11:26:32 +02:00
Žilvinas Ledas a9908ecfc1
Stable Diffusion image-to-image and inpaint using onnx. (#552)
* * Stable Diffusion img2img using onnx.

* * Stable Diffusion inpaint using onnx.

* Export vae_encoder, upgrade img2img, add test

* updated inpainting pipeline + test

* style

Co-authored-by: anton-l <anton@huggingface.co>
2022-10-18 17:44:01 +02:00
Anton Lozhkov 728a3f3ec1
Rename StableDiffusionOnnxPipeline -> OnnxStableDiffusionPipeline (#887)
Rename and deprecate
2022-10-18 09:14:30 +02:00
Pedro Cuenca 100e094cc9
Fix autoencoder test (#886)
Fix autoencoder test.
2022-10-17 21:47:13 +02:00
Anton Lozhkov cca59ce3a2
Add Apple M1 tests (#796)
* [CI] Add Apple M1 tests

* setup-python

* python build

* conda install

* remove branch

* only 3.8 is built for osx-arm

* try fetching prebuilt tokenizers

* use user cache

* update shells

* Reports and cleanup

* -> MPS

* Disable parallel tests

* Better naming

* investigate worker crash

* return xdist

* restart

* num_workers=2

* still crashing?

* faulthandler for segfaults

* faulthandler for segfaults

* remove restarts, stop on segfault

* torch version

* change installation order

* Use pre-RC version of PyTorch.

To be updated when it is released.

* Skip crashing test on MPS, add new one that works.

* Skip cuda tests in mps device.

* Actually use generator in test.

I think this was a typo.

* make style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-17 20:27:30 +02:00
Anton Lozhkov 1d3234cbca
Remove the last of ["sample"] (#842) 2022-10-14 14:45:43 +02:00
Anton Lozhkov 52394b53e2
Bump to 0.6.0.dev0 (#831)
* Bump to 0.6.0.dev0

* Deprecate tensor_format and .samples

* style

* upd

* upd

* style

* sample -> images

* Update src/diffusers/schedulers/scheduling_ddpm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_ddim.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_karras_ve.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_lms_discrete.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_sde_ve.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/schedulers/scheduling_sde_vp.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-14 13:43:52 +02:00
Patrick von Platen 1d51224403
[Flax] Complete tests (#828) 2022-10-13 18:18:32 +02:00
Patrick von Platen 7c2262640b
Align PT and Flax API - allow loading checkpoint from PyTorch configs (#827)
* up

* finish

* add more tests

* up

* up

* finish
2022-10-13 17:43:06 +02:00
Patrick von Platen e713346ad1
Give more customizable options for safety checker (#815)
* Give more customizable options for safety checker

* Apply suggestions from code review

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* Finish

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-13 15:52:26 +02:00
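A small sketch of the safety-checker customization from #815: the checker component can be swapped out or disabled when constructing the pipeline (the library warns when it is disabled). The checkpoint id is an assumption.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumed checkpoint
    safety_checker=None,              # run without the checker, at the user's own risk
)
```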
Anton Lozhkov 26c7df5d82
Fix type mismatch error, add tests for negative prompts (#823) 2022-10-13 15:45:42 +02:00
Patrick von Platen f1d4289be8
[Flax] Add test (#824) 2022-10-13 13:55:39 +02:00
Patrick von Platen db47b1e4d9
[Dummy imports] Better error message (#795)
* [Dummy imports] Better error message

* Test: load pipeline with LMS scheduler.

Fails with a cryptic message if scipy is not installed.

* Correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-12 14:41:16 +02:00
Patrick von Platen 6bc11782b7
[Img2Img] Fix batch size mismatch prompts vs. init images (#793)
* [Img2Img] Fix batch size mismatch prompts vs. init images

* Remove bogus folder

* fix

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-10-12 13:00:36 +02:00
Akash Pannu a124204490
Flax: Trickle down `norm_num_groups` (#789)
* pass norm_num_groups param and add tests

* set resnet_groups for FlaxUNetMidBlock2D

* fixed docstrings

* fixed typo

* using is_flax_available util and created require_flax decorator
2022-10-11 20:05:10 +02:00
Patrick von Platen 22963ed826
Fix gradient checkpointing test (#797)
* Fix gradient checkpointing test

* more tests
2022-10-10 19:40:33 +02:00
Patrick von Platen fab17528da
[Low CPU memory] + device map (#772)
* add accelerate to load models with smaller memory footprint

* remove low_cpu_mem_usage as it is redundant

* move accelerate init weights context to modelling utils

* add test to ensure results are the same when loading with accelerate

* add tests to ensure ram usage gets lower when using accelerate

* move accelerate logic to single snippet under modelling utils and remove it from configuration utils

* format code using to pass quality check

* fix imports with isort

* add accelerate to test extra deps

* only import accelerate if device_map is set to auto

* move accelerate availability check to diffusers import utils

* format code

* add device map to pipeline abstraction

* lint it to pass PR quality check

* fix class check to use accelerate when using diffusers ModelMixin subclasses

* use low_cpu_mem_usage in transformers if device_map is not available

* NoModuleLayer

* comment out tests

* up

* uP

* finish

* Update src/diffusers/pipelines/stable_diffusion/safety_checker.py

* finish

* uP

* make style

Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
2022-10-10 18:05:49 +02:00
Patrick von Platen f3983d16ee
[Tests] Fix tests (#774)
* Fix tests

* remove bogus file
2022-10-07 19:38:40 +02:00
Suraj Patil 92d7086366
[img2img, inpainting] fix fp16 inference (#769)
* handle dtype in vae and image2image pipeline

* fix inpaint in fp16

* dtype should be handled in add_noise

* style

* address review comments

* add simple fast tests to check fp16

* fix test name

* put mask in fp16
2022-10-07 17:01:51 +02:00
James R T e0fece2b26
Add final latent slice checks to SD pipeline intermediate state tests (#731)
This is to ensure that the final latent slices stay somewhat consistent as more changes are introduced into the library.

Signed-off-by: James R T <jamestiotio@gmail.com>
2022-10-07 15:50:20 +02:00
apolinario fdfa7c8f15
Change fp16 error to warning (#764)
* Swap fp16 error to warning

Also remove the associated test

* Formatting

* warn -> warning

* Update src/diffusers/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-10-07 10:31:52 +02:00
Patrick von Platen ae672d58ef
[Tests] Lower required memory for clip guided and fix super edge-case git pipeline module bug (#754)
* [Tests] Lower required memory

* fix

* up

* uP
2022-10-06 19:15:26 +02:00
Patrick von Platen 6613a8c7ff make CI happy 2022-10-06 17:16:01 +02:00
Patrick von Platen d9c449ea30
Custom Pipelines (#744)
* [Custom Pipelines]

* uP

* make style

* finish

* finish

* remove ipdb

* upload

* fix

* finish docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: apolinario <joaopaulo.passos@gmail.com>

* finish

* final uploads

* remove unnecessary test

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: apolinario <joaopaulo.passos@gmail.com>
2022-10-06 16:54:02 +02:00
Suraj Patil f3128c8788
Actually fix the grad ckpt test (#734)
* use_deterministic_algorithms for grad ckpt test

* remove eval

* Apply suggestions from code review

* Update tests/test_models_unet.py
2022-10-06 16:04:00 +02:00