Commit Graph

175 Commits

Author SHA1 Message Date
Suraj Patil d144c46a59
[UNet2DConditionModel, UNet2DModel] pass norm_num_groups to all the blocks (#442)
* pass norm_num_groups to unet blocks and attention

* fix UNet2DConditionModel

* add norm_num_groups arg in vae

* add tests

* remove comment

* Apply suggestions from code review
2022-09-15 16:35:14 +02:00
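The `norm_num_groups` argument threaded through the blocks in this entry sets the group count used by the GroupNorm layers. A minimal usage sketch, with all other arguments left at their defaults (real checkpoints may use different values):

```python
from diffusers import UNet2DModel

# norm_num_groups is forwarded to the GroupNorm layers of every block.
model = UNet2DModel(norm_num_groups=16)
```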
Kashif Rasul b34be039f9
Karras VE, DDIM and DDPM flax schedulers (#508)
* beta never changes, so removed from state

* fix typos in docs

* removed unused var

* initial ddim flax scheduler

* import

* added dummy objects

* fix style

* fix typo

* docs

* fix typo in comment

* set return type

* added flax ddpm

* fix style

* remake

* pass PRNG key as argument and split before use

* fix doc string

* use config

* added flax Karras VE scheduler

* make style

* fix dummy

* fix ndarray type annotation

* `replace` returns a new state

* added lms_discrete scheduler

* use self.config

* add_noise needs state

* use config

* use config

* docstring

* added flax score sde ve

* fix imports

* fix typos
2022-09-15 15:55:48 +02:00
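The stateless design described in this entry (scheduler state is immutable, methods return a new state, and PRNG keys are split before use) can be sketched as follows. This is only an illustration of the pattern, not the actual diffusers Flax scheduler API; all names below are made up.

```python
from dataclasses import dataclass, replace

import jax
import jax.numpy as jnp


@dataclass(frozen=True)
class SchedulerState:
    # Immutable state: nothing is mutated in place.
    timestep: int
    sample: jnp.ndarray


def step(state: SchedulerState, noise_pred: jnp.ndarray, key) -> SchedulerState:
    # Split the PRNG key before use so randomness is never reused.
    key, noise_key = jax.random.split(key)
    noise = jax.random.normal(noise_key, state.sample.shape)
    new_sample = state.sample - 0.1 * noise_pred + 0.01 * noise
    # Return a new state instead of modifying the old one.
    return replace(state, timestep=state.timestep - 1, sample=new_sample)


key = jax.random.PRNGKey(0)
state = SchedulerState(timestep=10, sample=jnp.zeros((1, 4, 8, 8)))
state = step(state, jnp.ones((1, 4, 8, 8)), key)
```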
Kashif Rasul da7e3994ad
Fix vae tests for cpu and gpu (#480) 2022-09-13 19:14:20 +02:00
Nathan Lambert b56f102765
Fix scheduler inference steps error with power of 3 (#466)
* initial attempt at solving

* fix pndm power of 3 inference_step

* add power of 3 test

* fix index in pndm test, remove ddim test

* add comments, change to round()
2022-09-13 09:48:33 -06:00
Pedro Cuenca e335f05fb1
Rename test_scheduler_outputs_equivalence in model tests. (#451) 2022-09-13 15:03:36 +02:00
Kashif Rasul f4781a0b27
update expected results of slow tests (#268)
* update expected results of slow tests

* relax sum and mean tests

* Print shapes when reporting exception

* formatting

* fix sentence

* relax test_stable_diffusion_fast_ddim for gpu fp16

* relax flaky tests on GPU

* added comment on large tolerances

* black

* format

* set scheduler seed

* added generator

* use np.isclose

* set num_inference_steps to 50

* fix dep. warning

* update expected_slice

* preprocess if image

* updated expected results

* updated expected from CI

* pass generator to VAE

* undo change back to orig

* use original

* revert back the expected on cpu

* revert back values for CPU

* more undo

* update result after using gen

* update mean

* set generator for mps

* update expected on CI server

* undo

* use new seed every time

* cpu manual seed

* reduce num_inference_steps

* style

* use generator for randn

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-12 15:49:39 +02:00
Patrick von Platen f55190b275
[Tests] Correct image folder tests (#427)
* [Tests] Correct image folder tests

* up
2022-09-08 16:45:29 +02:00
Anton Lozhkov 8d9c4a531b
[ONNX] Stable Diffusion exporter and pipeline (#399)
* initial export and design

* update imports

* custom provider, import fixes

* Update src/diffusers/onnx_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/onnx_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* remove push_to_hub

* Update src/diffusers/onnx_utils.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* remove torch_device

* numpify the rest of the pipeline

* torchify the safety checker

* revert tensor

* Code review suggestions + quality

* fix tests

* fix provider, add an end-to-end test

* style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-09-08 15:17:28 +02:00
Anton Lozhkov 7bcc873bb5
[Tests] Make image-based SD tests reproducible with fixed datasets (#424)
nicer datasets
2022-09-08 15:14:24 +02:00
Pedro Cuenca 5dda1735fd
Inference support for `mps` device (#355)
* Initial support for mps in Stable Diffusion pipeline.

* Initial "warmup" implementation when using mps.

* Make some deterministic tests pass with mps.

* Disable training tests when using mps.

* SD: generate latents on CPU, then move to device.

This is especially important when using the mps device, because
generators are not supported there. See for example
https://github.com/pytorch/pytorch/issues/84288.

In addition, the other pipelines seem to use the same approach: generate
the random samples then move to the appropriate device.

After this change, generating an image in MPS produces the same result
as when using the CPU, if the same seed is used.

* Remove prints.

* Pass AutoencoderKL test_output_pretrained with mps.

Sampling from `posterior` must be done in CPU.

* Style

* Do not use torch.long for log op in mps device.

* Perform incompatible padding ops in CPU.

UNet tests now pass.
See https://github.com/pytorch/pytorch/issues/84535

* Style: fix import order.

* Remove unused symbols.

* Remove MPSWarmupMixin, do not apply automatically.

We do apply warmup in the tests, but not during normal use.
This adopts some PR suggestions by @patrickvonplaten.

* Add comment for mps fallback to CPU step.

* Add README_mps.md for mps installation and use.

* Apply `black` to modified files.

* Restrict README_mps to SD, show measures in table.

* Make PNDM indexing compatible with mps.

Addresses #239.

* Do not use float64 when using LDMScheduler.

Fixes #358.

* Fix typo identified by @patil-suraj

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Adapt example to new output style.

* Restore 1:1 results reproducibility with CompVis.

However, mps latents need to be generated in CPU because generators
don't work in the mps device.

* Move PyTorch nightly to requirements.

* Adapt `test_scheduler_outputs_equivalence` to MPS.

* mps: skip training tests instead of ignoring silently.

* Make VQModel tests pass on mps.

* mps ddim tests: warmup, increase tolerance.

* ScoreSdeVeScheduler indexing made mps compatible.

* Make ldm pipeline tests pass using warmup.

* Style

* Simplify casting as suggested in PR.

* Add Known Issues to readme.

* `isort` import order.

* Remove _mps_warmup helpers from ModelMixin.

And just make changes to the tests.

* Skip tests using unittest decorator for consistency.

* Remove temporary var.

* Remove spurious blank space.

* Remove unused symbol.

* Remove README_mps.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-08 13:37:36 +02:00
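The "generate latents on CPU, then move to the device" approach described in this entry can be sketched as below. This is a minimal illustration assuming an mps-capable PyTorch build; shapes and variable names are placeholders, not the diffusers implementation.

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

# torch.Generator is not supported on the mps device (see the linked PyTorch
# issue), so seed and sample on the CPU first...
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn((1, 4, 64, 64), generator=generator)

# ...then move the latents to the target device. With the same seed, mps now
# produces the same result as the CPU.
latents = latents.to(device)
```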
Patrick von Platen 5c4ea00de7
Efficient Attention (#366)
* up

* add tests

* correct

* up

* finish

* better naming

* Update README.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-06 18:06:47 +02:00
Anton Lozhkov 7a1229fa29
[Tests] Fix SD slow tests (#364)
move to fp16, update ddim
2022-09-06 17:01:04 +02:00
Patrick von Platen cc59b05635
[ModelOutputs] Replace dict outputs with Dict/Dataclass and allow to return tuples (#334)
* add outputs for models

* add for pipelines

* finish schedulers

* better naming

* adapt tests as well

* replace dict access with . access

* make schedulers works

* finish

* correct readme

* make bcp compatible

* up

* small fix

* finish

* more fixes

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/vae.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Adapt model outputs

* Apply more suggestions

* finish examples

* correct

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-05 14:49:26 +02:00
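The output convention this entry introduces (dataclass-style outputs read with attribute access, plus an opt-in plain tuple for backwards compatibility) can be sketched roughly as follows; the class and field names are illustrative, not the exact diffusers definitions.

```python
from dataclasses import astuple, dataclass
from typing import Optional, Tuple, Union

import numpy as np


@dataclass
class PipelineOutput:
    images: np.ndarray
    nsfw_content_detected: Optional[list] = None


def run_pipeline(return_dict: bool = True) -> Union[PipelineOutput, Tuple]:
    images = np.zeros((1, 64, 64, 3), dtype=np.float32)
    output = PipelineOutput(images=images)
    if not return_dict:
        # Backwards-compatible tuple output.
        return astuple(output)
    return output


result = run_pipeline()
print(result.images.shape)            # attribute access instead of result["images"]
images, _ = run_pipeline(return_dict=False)
```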
Patrick von Platen 9b704f7688
[Img2Img2] Re-add K LMS scheduler (#340) 2022-09-03 12:19:58 +02:00
Anton Lozhkov 66fd3ec70d
[CI] try to fix GPU OOMs between tests and excessive tqdm logging (#323)
* Fix tqdm and OOM

* tqdm auto

* tqdm is still spamming; try to disable it altogether

* rather just set the pipe config, to keep the global tqdm clean

* style
2022-09-02 13:18:49 +02:00
Anton Lozhkov 4724250980
Fix nondeterministic tests for GPU runs (#314)
* Fix nondeterministic tests for GPU runs

* force SD fast tests to the CPU
2022-09-01 15:25:39 +02:00
Patrick von Platen e8140304b9
[Tests] Add fast pipeline tests (#302)
* add fast tests

* Finish
2022-08-31 21:17:02 +02:00
Anton Lozhkov ab7857019a
Add missing auth tokens for two SD tests (#296) 2022-08-31 17:57:46 +02:00
Nouamane Tazi b64c522759
[PNDM Scheduler] format timesteps attrs to np arrays (#273)
* format timesteps attrs to np arrays in pndm scheduler
because lists don't get formatted to tensors in `self.set_format`

* convert to long type to use timesteps as indices for tensors

* add scheduler set_format test

* fix `_timesteps` type

* make style with black 22.3.0 and isort 5.10.1

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-31 14:12:08 +02:00
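The cast to a long dtype mentioned in this entry matters because only integer tensors can index other tensors. A minimal illustration (values are arbitrary):

```python
import numpy as np
import torch

alphas_cumprod = torch.linspace(0.999, 0.001, 1000)

# Timesteps kept as a NumPy array, as in the scheduler...
timesteps = np.array([999, 500, 0])

# ...must be converted to a long tensor before they can index another tensor.
alphas_t = alphas_cumprod[torch.from_numpy(timesteps).long()]
print(alphas_t)
```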
Patrick von Platen a4d5b59f13
Refactor Pipelines / Community pipelines and add better explanations. (#257)
* [Examples readme]

* Improve

* more

* save

* save

* save more

* up

* up

* Apply suggestions from code review

Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* up

* make deterministic

* up

* better

* up

* add generator to img2img pipe

* save

* make pipelines deterministic

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* apply all changes

* more corrections

* finish

* improve table

* more fixes

* up

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Update src/diffusers/pipelines/README.md

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* add better links

* fix more

* finish

Co-authored-by: Nathan Lambert <nathan@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-30 18:43:42 +02:00
hysts 5e84353eba
Refactor progress bar (#242)
* Refactor progress bar of pipeline __call__

* Make any tqdm configs available

* remove init

* add some tests

* remove file

* finish

* make style

* improve progress bar test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-30 12:30:06 +02:00
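As a usage sketch of the refactor in this entry: the pipeline-level progress bar can be configured with arbitrary tqdm keywords or silenced entirely. The checkpoint id is a placeholder, and `set_progress_bar_config` is assumed to be the helper this change adds.

```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32")  # placeholder checkpoint

# Disable tqdm output entirely (handy on CI, as in the OOM/tqdm entry further up)...
pipe.set_progress_bar_config(disable=True)

# ...or forward any other tqdm keyword arguments.
pipe.set_progress_bar_config(desc="sampling", leave=False)
```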
Patrick von Platen 9e1b1ca49d
[Tests] Make sure tests are on GPU (#269)
* [Tests] Make sure tests are on GPU

* move more models

* speed up tests
2022-08-29 15:58:11 +02:00
Kashif Rasul 47893164ab
added test workflow and fixed failing test (#237)
* added test workflow and fixed failing test

* 4 decimal places
2022-08-24 13:46:53 +02:00
Kashif Rasul 102cabeb23
split tests_modeling_utils (#223)
* split tests_modeling_utils

* Fix SD tests .to(device)

* fix merge

* Fix style

Co-authored-by: anton-l <anton@huggingface.co>
2022-08-24 13:27:16 +02:00
anton-l 577a6a65d6 Fix SD tests .to(device) 2022-08-22 10:22:28 +02:00
Nathan Lambert 3f1861ee46
hotfix for pndm test (#220) 2022-08-22 07:23:59 +02:00
Anton Lozhkov e30e1b89d0
Support one-string prompts and custom image size in LDM (#212)
* Support one-string prompts in LDM

* Add other features from SD too
2022-08-18 17:55:15 +02:00
Anton Lozhkov ed22b4fd07
Revive `make quality` (#203)
* Revive Make utils

* Add datasets for training too
2022-08-17 15:22:04 +02:00
Suraj Patil 3cd20d59d7
fix test_from_pretrained_hub_pass_model (#194)
init pipeline once
2022-08-17 13:58:18 +05:30
Pedro Cuenca 513f1fbfb0
Allow passing non-default modules to pipeline (#188)
* Allow passing non-default modules to pipeline.

Override modules are recognized and replaced in the pipeline. However,
no check is performed about mismatched classes yet. This is because the
override module is already instantiated and we have no library or class
name to compare against.

* up

* add test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-16 17:25:25 +02:00
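A usage sketch of the override mechanism described in this entry: components passed as keyword arguments replace the corresponding defaults when the pipeline is loaded (with, per the note above, no class-compatibility check yet). The checkpoint id and scheduler settings are placeholders.

```python
from diffusers import DDIMScheduler, DiffusionPipeline

# A non-default scheduler instantiated by hand.
scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")

# Passing it as a keyword argument overrides the scheduler the checkpoint
# would normally load.
pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # placeholder checkpoint
    scheduler=scheduler,
)
```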
Anton Lozhkov d7b692083c
Add K-LMS scheduler from k-diffusion (#185)
* test LMS with LDM

* test LMS with LDM

* Interchangeable sigma and timestep. Added dummy objects

* Debug

* cuda generator

* Fix derivatives

* Update tests

* Rename Lms->LMS
2022-08-16 16:48:35 +02:00
Patrick von Platen 9070c394aa
[Naming] correct config naming of DDIM pipeline (#187) 2022-08-16 15:50:36 +02:00
Patrick von Platen 194ed794d8
[PNDM] Stable diffusion (#186)
* [PNDM] Stable diffusion

* finish
2022-08-16 15:33:13 +02:00
Patrick von Platen 051b34635f
[Half precision] Make sure half-precision is correct (#182)
* [Half precision] Make sure half-precision is correct

* Update src/diffusers/models/unet_2d.py

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* correct some tests

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* finalize

* finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-08-16 10:42:24 +02:00
Suraj Patil c25d8c905c
add tests for stable diffusion pipeline (#178)
add tests for sd pipeline
2022-08-14 18:51:02 +05:30
Suraj Patil 5782e0393d
Stable diffusion pipeline (#168)
* add stable diffusion pipeline

* get rid of multiple if/else

* batch_size is unused

* add type hints

* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py

* fix some bugs

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-08-14 14:43:14 +02:00
Anton Lozhkov dd10da76a7
Add an alternative Karras et al. stochastic scheduler for VE models (#160)
* karras + VE, not flexible yet

* Fix inputs incompatibility with the original unet

* Roll back sigma scaling

* Apply suggestions from code review

* Old comment

* Fix doc
2022-08-09 15:58:30 +02:00
Suraj Patil a2090375ca
[VAE] fix the downsample block in Encoder. (#156)
* pass downsample_padding in encoder

* update tests
2022-08-06 17:36:07 +05:30
Patrick von Platen 3100bc9670
[Vae and AutoencoderKL] Final clean of LDM checkpoints (#137)
* [Vae and AutoencoderKL clean]

* save intermediate finished work

* more progress

* more progress

* finish modeling code

* save intermediate

* finish

* Correct tests
2022-07-28 10:14:34 +02:00
Anton Lozhkov e05f03ae41
Disable test_ddpm_ddim_equality_batched until resolved (#142)
disable test_ddpm_ddim_equality_batched
2022-07-28 09:29:29 +02:00
Anton Lozhkov 6c15636b0b
Add training and batched inference test for DDPM vs DDIM (#140)
* Add torch_device to the VE pipeline

* Mark the training test with slow
2022-07-27 15:01:56 +02:00
Patrick von Platen 5311f564ed
Final fixes (#118)
final fixes before release
2022-07-21 14:36:43 +02:00
Patrick von Platen 394243ce98 finish pndm sampler 2022-07-21 01:50:12 +00:00
Nathan Lambert fe98574622
fixing tests for numpy and make deterministic (ddpm) (#106)
* work in progress, fixing tests for numpy and make deterministic

* make tests pass via pytorch

* make pytorch == numpy test cleaner

* change default tensor format pndm --> pt
2022-07-21 02:24:59 +02:00
Patrick von Platen c5c9399610 correct paths for tests 2022-07-21 00:20:10 +00:00
Patrick von Platen 836f3f35c2
Rename pipelines (#115)
up
2022-07-21 01:39:46 +02:00
Patrick von Platen 9c3820d05a
Big Model Renaming (#109)
* up

* change model name

* renaming

* more changes

* up

* up

* up

* save checkpoint

* finish api / naming

* finish config renaming

* rename all weights

* finish really
2022-07-21 01:30:45 +02:00
Nathan Lambert 889aa6008c
PNDM API Updates, Tests Cleaning (#103)
* organize PNDM tests, begin API change

* clean timestep API PNDM

* update pipeline PNDM

* fix typo

* API clean round 2

* small nit
2022-07-20 12:47:39 -07:00
anton-l 6b275fca49 make PIL the default output type 2022-07-20 18:28:22 +02:00
Anton Lozhkov 1b42732ced
PIL-ify the pipeline outputs (#111) 2022-07-20 18:09:51 +02:00
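Making PIL the default output type, as in the two entries above, amounts to mapping the pipeline's float arrays in [0, 1] to 8-bit PIL images. A rough sketch of that step (not the exact diffusers helper):

```python
import numpy as np
from PIL import Image

# Pipeline output: a batch of float images in [0, 1], shape (N, H, W, C).
images = np.random.rand(2, 64, 64, 3)

# Scale to 8-bit and wrap each image in a PIL Image.
pil_images = [Image.fromarray((img * 255).round().astype("uint8")) for img in images]
```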