Commit Graph

965 Commits

Author SHA1 Message Date
Anton Lozhkov d8572f20c7
Fix onnx tensor format (#654)
fix np onnx
2022-09-27 19:09:13 +02:00
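The fix above swaps torch tensors for numpy arrays at the ONNX boundary. A minimal sketch of the general pattern, with the session call left hypothetical since the PR's exact call sites aren't shown here:

```python
import numpy as np
import torch

def to_numpy(tensor: torch.Tensor) -> np.ndarray:
    # ONNX Runtime consumes and returns numpy arrays, not torch tensors,
    # so scheduler outputs must be converted before being fed to the session.
    return tensor.detach().cpu().numpy()

# hypothetical ONNX Runtime call site:
# noise_pred = session.run(None, {"sample": to_numpy(latents)})[0]
```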
Suraj Patil c0c98df9a1
[CLIPGuidedStableDiffusion] remove set_format from pipeline (#653)
remove set_format from pipeline
2022-09-27 18:56:47 +02:00
Kashif Rasul 85494e8818
[PyTorch] add deprecation warning for PyTorch schedulers (#651)
* add deprecation warning for schedulers

* fix format
2022-09-27 18:39:34 +02:00
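A deprecation warning like the one this PR adds might look as follows; this is a sketch, not #651's code, and the exact message and trigger may differ:

```python
import warnings

def warn_if_numpy_format(tensor_format: str) -> None:
    # Schedulers are moving to PyTorch-only tensors; warn on the old numpy mode.
    if tensor_format != "pt":
        warnings.warn(
            "numpy-based scheduler usage is deprecated; schedulers now operate "
            "on PyTorch tensors only.",
            DeprecationWarning,
        )
```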
Suraj Patil 3304538229
[DDIM, DDPM] fix add_noise (#648)
fix add noise
2022-09-27 17:32:43 +02:00
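`add_noise` implements the standard DDPM forward-noising step, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. A sketch of that computation (the PR fixes this routine; its exact diff isn't shown here):

```python
import torch

def add_noise(original_samples, noise, timesteps, alphas_cumprod):
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
    sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
    sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
    # broadcast the per-timestep scalars over the sample dimensions
    while sqrt_alpha_prod.dim() < original_samples.dim():
        sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
        sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
    return sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
```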
Suraj Patil e5eed5235b
[dreambooth] update install section (#650)
update install section
2022-09-27 17:32:21 +02:00
Suraj Patil ac665b6484
[examples/dreambooth] don't pass tensor_format to scheduler. (#649)
don't pass tensor_format
2022-09-27 17:24:12 +02:00
Kashif Rasul bd8df2da89
[Pytorch] Pytorch only schedulers (#534)
* pytorch only schedulers

* fix style

* remove match_shape

* pytorch only ddpm

* remove SchedulerMixin

* remove numpy from karras_ve

* fix types

* remove numpy from lms_discrete

* remove numpy from pndm

* fix typo

* remove mixin and numpy from sde_vp and ve

* remove remaining tensor_format

* fix style

* sigmas has to be torch tensor

* removed set_format in readme

* remove set format from docs

* remove set_format from pipelines

* update tests

* fix typo

* continue to use mixin

* fix imports

* removed unused imports

* match shape instead of assuming image shapes

* remove import typo

* update call to add_noise

* use math instead of numpy

* fix t_index

* removed commented out numpy tests

* timesteps need to be discrete

* cast timesteps to int in flax scheduler too

* fix device mismatch issue

* small fix

* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:27:34 +02:00
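After this PR, schedulers consume and produce torch tensors throughout: the numpy paths, `tensor_format`, and `set_format` are gone, sigmas are torch tensors, and timesteps are discrete integers. A minimal sketch of the resulting usage, with a random tensor standing in for a UNet prediction:

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 32, 32)
for t in scheduler.timesteps:  # a torch tensor of discrete (integer) timesteps
    model_output = torch.randn_like(sample)  # stand-in for a UNet prediction
    sample = scheduler.step(model_output, t, sample).prev_sample
```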
Zhenhuan Liu 3b747de845
Add training example for DreamBooth. (#554)
* Add training example for DreamBooth.

* Fix bugs.

* Update readme and default hyperparameters.

* Reformatting code with black.

* Update for multi-GPU training.

* Apply suggestions from code review

* improve sampling

* fix autocast

* improve sampling more

* fix saving

* actually fix saving

* fix saving

* improve dataset

* fix collate fn

* fix collate_fn

* fix collate fn

* fix key name

* fix dataset

* fix collate fn

* concat batch in collate fn

* add grad ckpt

* add option for 8bit adam

* do two forward passes for prior preservation

* Revert "do two forward passes for prior preservation"

This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.

* add option for prior_loss_weight

* add option for clip grad norm

* add more comments

* update readme

* update readme

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add docstr for dataset

* update the saving logic

* Update examples/dreambooth/README.md

* remove unused imports

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-09-27 15:01:18 +02:00
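With prior preservation enabled, the example's collate_fn concatenates instance and class examples into one batch ("concat batch in collate fn" above), so a single forward pass covers both; the reverted commit shows the two-pass alternative was abandoned. A sketch of the corresponding loss, with `prior_loss_weight` as named in the PR:

```python
import torch
import torch.nn.functional as F

def dreambooth_loss(model_pred, target, prior_loss_weight=1.0):
    # The batch holds instance examples followed by class (prior) examples,
    # so chunk the prediction and target back apart before weighting.
    inst_pred, prior_pred = torch.chunk(model_pred, 2, dim=0)
    inst_target, prior_target = torch.chunk(target, 2, dim=0)
    loss = F.mse_loss(inst_pred, inst_target)
    prior_loss = F.mse_loss(prior_pred, prior_target)
    return loss + prior_loss_weight * prior_loss
```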
Yih-Dar d886e49782
Fix `SpatialTransformer` (#578)
* Fix SpatialTransformer

* Fix SpatialTransformer

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-27 14:42:43 +02:00
Pedro Cuenca ab3fd671d7
Flax pipeline pndm (#583)
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline

* todo comment

* Fix imports

* Fix imports

* add dummies

* Fix empty init

* make pipeline work

* up

* Allow dtype to be overridden on model load.

This may be a temporary solution until #567 is addressed.

* Convert params to bfloat16 or fp16 after loading.

This deals with the weights, not the model.

* Use Flax schedulers (typing, docstring)

* PNDM: replace control flow with jax functions.

Otherwise jitting/parallelization don't work properly as they don't know
how to deal with traced objects.

I temporarily removed `step_prk`.

* Pass latents shape to scheduler set_timesteps()

PNDMScheduler uses it to reserve space, other schedulers will just
ignore it.

* Wrap model imports inside availability checks.

* Optionally return state in from_config.

Useful for Flax schedulers.

* Do not convert model weights to dtype.

* Re-enable PRK steps with functional implementation.

Values returned still not verified for correctness.

* Remove left over has_state var.

* make style

* Apply suggestion list -> tuple

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Apply suggestion list -> tuple

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Remove unused comments.

* Use zeros instead of empty.

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-09-27 14:16:11 +02:00
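Replacing Python control flow with jax functions, as the PNDM commit above describes, is necessary because under `jit`/`pmap` the step counter is a traced value, so `if counter < 4:` cannot be evaluated at trace time. A sketch of the branchless pattern (names hypothetical):

```python
import jax.numpy as jnp

def select_step(counter, prk_result, plms_result):
    # jnp.where works on traced values where a Python `if` would fail.
    return jnp.where(counter < 4, prk_result, plms_result)
```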
Pedro Cuenca c070e5f0c5
Remove inappropriate docstrings in the LMS scheduler. (#634) 2022-09-27 13:22:05 +02:00
Ryan Russell b6945310c9
refactor: `custom_init_isort` readability fixups (#631)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-27 13:13:36 +02:00
Pedro Cuenca b671cb0920
Remove deprecated `torch_device` kwarg (#623)
* Remove deprecated `torch_device` kwarg.

* Remove unused imports.
2022-09-27 12:07:41 +02:00
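With the `torch_device` kwarg gone, device placement follows the usual PyTorch idiom of moving the pipeline once up front:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")  # replaces the removed per-call `torch_device` kwarg
image = pipe("a photo of an astronaut riding a horse").images[0]
```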
Abdullah Alfaraj bb0c5d1595
Fix docs link to train_unconditional.py (#642)
the link points to an old location of the train_unconditional.py file
2022-09-27 11:23:09 +02:00
Yuta Hayashibe f7ebe56921
Warning for too long prompts in DiffusionPipelines (Resolve #447) (#472)
* Return encoded texts by DiffusionPipelines

* Updated README to show how to use encoded_text_input

* Reverted examples in README.md

* Reverted all

* Warning for long prompts

* Fix bugs

* Formatted
2022-09-27 11:14:16 +02:00
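A sketch of the warning idea: tokenize without truncation, compare against the text encoder's window, and report what gets cut. The PR's exact message and plumbing may differ:

```python
import logging

logger = logging.getLogger(__name__)

def warn_on_long_prompt(tokenizer, prompt: str) -> None:
    ids = tokenizer(prompt).input_ids
    if len(ids) > tokenizer.model_max_length:
        removed = tokenizer.decode(ids[tokenizer.model_max_length:])
        logger.warning(
            f"Prompt will be truncated to {tokenizer.model_max_length} tokens; "
            f"removed text: {removed}"
        )
```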
Anton Lozhkov 57b70c599c
[CI] Fix onnxruntime installation order (#633) 2022-09-24 18:32:03 +02:00
Grigory Sizov 35e9209601
Fix formula for noise levels in Karras scheduler and tests (#627)
fix formula for noise levels in karras scheduler and tests
2022-09-24 18:24:08 +02:00
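For reference, the noise levels in Karras et al. (2022) follow eq. (5) of the paper; a sketch of that schedule (the scheduler's exact parameterization after #627 may differ):

```python
import numpy as np

def karras_sigmas(n: int, sigma_min: float, sigma_max: float, rho: float = 7.0):
    # sigma_i = (sigma_max^(1/rho) + i/(n-1) * (sigma_min^(1/rho) - sigma_max^(1/rho)))^rho
    ramp = np.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
```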
Ryan Russell d0aa899f0e
docs: `src/diffusers` readability improvements (#629)
* docs: `src/diffusers` readability improvements

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* docs: `make style` lint

Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-24 16:21:28 +02:00
Pedro Cuenca 1e152030bd
Fix breaking error: "ort is not defined" (#626)
Fix "ort is not defined" issue.
2022-09-23 17:02:03 +02:00
cloudhan 8211b62227
Allow passing session_options for ORT backend (#620) 2022-09-23 15:28:31 +02:00
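`SessionOptions` is ONNX Runtime's handle for tuning threading, memory, and graph optimizations. A sketch of what callers gain; the diffusers-side kwarg wiring is left hypothetical since #620's signature isn't shown here:

```python
import onnxruntime as ort

opts = ort.SessionOptions()
opts.intra_op_num_threads = 4
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

# hypothetical: the pipeline forwards these options to its InferenceSession,
# roughly equivalent to:
# session = ort.InferenceSession("unet/model.onnx", sess_options=opts,
#                                providers=["CPUExecutionProvider"])
```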
Ryan Russell ce31f83d8c
refactor: pipelines readability improvements (#622)
* refactor: pipelines readability improvements

Signed-off-by: Ryan Russell <git@ryanrussell.org>

* docs: remove todo comment from flax pipeline

Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-23 15:02:12 +02:00
Abdullah Alfaraj b00382e2a7
fix docs: change sample to images (#613)
the result of running the pipeline is stored in StableDiffusionPipelineOutput.images
2022-09-23 14:27:29 +02:00
Younes Belkada 8b0be93596
Flax documentation (#589)
* documenting `attention_flax.py` file

* documenting `embeddings_flax.py`

* documenting `unet_blocks_flax.py`

* Add new objs to doc page

* document `vae_flax.py`

* Apply suggestions from code review

* modify `unet_2d_condition_flax.py`

* make style

* Apply suggestions from code review

* make style

* Apply suggestions from code review

* fix indent

* fix typo

* fix indent unet

* Update src/diffusers/models/vae_flax.py

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-09-23 13:24:16 +02:00
Ryan Russell df80ccf7de
docs: `.md` readability fixups (#619)
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-23 12:02:27 +02:00
Jonathan Whitaker 91db81894b
Adding pred_original_sample to SchedulerOutput for some samplers (#614)
* Adding pred_original_sample to SchedulerOutput of DDPMScheduler, DDIMScheduler, LMSDiscreteScheduler, KarrasVeScheduler step methods so we can access the predicted denoised outputs

* Gave DDPMScheduler, DDIMScheduler and LMSDiscreteScheduler their own output dataclasses so the default SchedulerOutput in scheduling_utils does not need pred_original_sample as an optional extra

* Reordered library imports to follow standard

* didn't get import order quite right, apparently

* Forgot to change name of LMSDiscreteSchedulerOutput

* Aha, needed some extra libs for make style to fully work
2022-09-22 18:53:40 +02:00
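With per-scheduler output dataclasses, the denoised estimate is available alongside the next sample. A minimal sketch using DDPM:

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 32, 32)
t = scheduler.timesteps[0]
out = scheduler.step(torch.randn_like(sample), t, sample)  # random stand-in for a UNet output
x0_estimate = out.pred_original_sample  # the model's predicted denoised sample
next_sample = out.prev_sample
```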
Ryan Russell f149d037de
docs: fix `stochastic_karras_ve` ref (#618)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-22 18:36:29 +02:00
Suraj Patil e7120bae95
[UNet2DConditionModel] add gradient checkpointing (#461)
* add grad ckpt to downsample blocks

* make it work

* don't pass gradient_checkpointing to upsample block

* add tests for UNet2DConditionModel

* add test_gradient_checkpointing

* add gradient_checkpointing for up and down blocks

* add functions to enable and disable grad ckpt

* remove the forward argument

* better naming

* make supports_gradient_checkpointing private
2022-09-22 15:36:47 +02:00
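The enable/disable functions added here follow the usual diffusers model API:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel()  # default config; load a checkpoint in practice
unet.enable_gradient_checkpointing()   # recompute activations in backward to save memory
# ... training steps ...
unet.disable_gradient_checkpointing()  # turn off when memory is not a concern
```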
Mishig Davaadorj 534512bedb
[flax] 'dtype' should not be part of self._internal_dict (#609) 2022-09-22 11:46:31 +02:00
Mishig Davaadorj 4b8880a306
Make flax from_pretrained work with local subfolder (#608) 2022-09-22 11:44:22 +02:00
Anton Lozhkov dd350c8afe
Handle the PIL.Image.Resampling deprecation (#588)
* Handle the PIL.Image.Resampling deprecation

* style
2022-09-22 00:02:14 +02:00
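Pillow 9.1 moved the resampling filters into the `PIL.Image.Resampling` enum and deprecated the module-level constants. A sketch of a compatible lookup (the PR's exact guard may differ):

```python
import PIL.Image

if hasattr(PIL.Image, "Resampling"):  # Pillow >= 9.1
    LANCZOS = PIL.Image.Resampling.LANCZOS
else:  # older Pillow
    LANCZOS = PIL.Image.LANCZOS
```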
Ryan Russell 80183ca58b
docs: fix `Berkeley` ref (#611)
Signed-off-by: Ryan Russell <git@ryanrussell.org>

Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-09-21 22:55:32 +02:00
Anton Lozhkov 6bd005ebbe
[ONNX] Collate the external weights, speed up loading from the hub (#610) 2022-09-21 22:26:30 +02:00
Pedro Cuenca a9fdb3de9e
Return Flax scheduler state (#601)
* Optionally return state in from_config.

Useful for Flax schedulers.

* has_state is now a property, make check more strict.

I don't check that the class is `SchedulerMixin`, to prevent circular
dependencies. It should be enough that the class name starts with "Flax",
the object declares "has_state", and "create_state" exists too.

* Use state in pipeline from_pretrained.

* Make style
2022-09-21 22:25:27 +02:00
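Flax schedulers keep their mutable quantities in an explicit state object that is threaded through `set_timesteps` and `step`. A minimal sketch of obtaining that state:

```python
from diffusers import FlaxDDIMScheduler

scheduler = FlaxDDIMScheduler(num_train_timesteps=1000)
state = scheduler.create_state()  # immutable state passed through set_timesteps()/step()
```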
Anton Lozhkov e72f1a8a71
Add torchvision to training deps (#607) 2022-09-21 13:54:32 +02:00
Anton Lozhkov 4f1c989ffb
Add smoke tests for the training examples (#585)
* Add smoke tests for the training examples

* upd

* use a dummy dataset

* mark as slow

* cleanup

* Update test cases

* naming
2022-09-21 13:36:59 +02:00
Younes Belkada 3fc8ef7297
Replace `dropout_prob` by `dropout` in `vae` (#595)
replace `dropout_prob` by `dropout` in `vae`
2022-09-21 11:43:28 +02:00
Mishig Davaadorj 8685699392
Mv weights name consts to diffusers.utils (#605) 2022-09-21 11:30:14 +02:00
Mishig Davaadorj f810060006
Fix flax from_pretrained pytorch weight check (#603) 2022-09-21 11:17:15 +02:00
Pedro Cuenca fb2fbab10b
Allow dtype to be specified in Flax pipeline (#600)
* Fix typo in docstring.

* Allow dtype to be overridden on model load.

This may be a temporary solution until #567 is addressed.

* Create latents in float32

The denoising loop always computes the next step in float32, so this
would fail when using `bfloat16`.
2022-09-21 10:57:01 +02:00
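A sketch of the resulting API; the checkpoint id is illustrative:

```python
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

# Weights are converted to bfloat16 after loading; latents are still created
# in float32 because the denoising loop computes each step in float32.
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # illustrative checkpoint
    dtype=jnp.bfloat16,
)
```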
Pedro Cuenca fb03aad8b4
Fix params replication when using the dummy checker (#602)
Fix params replication when using the dummy checker.
2022-09-21 09:38:10 +02:00
Patrick von Platen 2345481c0e
[Flax] Fix unet and ddim scheduler (#594)
* [Flax] Fix unet and ddim scheduler

* correct

* finish
2022-09-20 23:29:09 +02:00
Mishig Davaadorj d934d3d795
FlaxDiffusionPipeline & FlaxStableDiffusionPipeline (#559)
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline

* todo comment

* Fix imports

* Fix imports

* add dummies

* Fix empty init

* make pipeline work

* up

* Use Flax schedulers (typing, docstring)

* Wrap model imports inside availability checks.

* more updates

* make sure flax is not broken

* make style

* more fixes

* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@latenitesoft.com>
2022-09-20 21:28:07 +02:00
Suraj Patil c6629e6f11
[flax safety checker] Use `FlaxPreTrainedModel` for saving/loading (#591)
* use FlaxPreTrainedModel for flax safety module

* fix name

* fix one more

* Apply suggestions from code review
2022-09-20 20:11:32 +02:00
Anton Lozhkov 8a6833b85c
Add the K-LMS scheduler to the inpainting pipeline + tests (#587)
* Add the K-LMS scheduler to the inpainting pipeline + tests

* Remove redundant casts
2022-09-20 19:10:44 +02:00
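Swapping the inpainting pipeline onto K-LMS follows the usual component-override pattern (checkpoint id illustrative):

```python
from diffusers import LMSDiscreteScheduler, StableDiffusionInpaintPipeline

lms = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # illustrative checkpoint
    scheduler=lms,
)
```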
Anton Lozhkov a45dca077c
Fix BaseOutput initialization from dict (#570)
* Fix BaseOutput initialization from dict

* style

* Simplify post-init, add tests

* remove debug
2022-09-20 18:32:16 +02:00
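`BaseOutput` is a dataclass that also behaves like a dict and a tuple; the fix keeps all three access styles consistent when an output is built from a dict. A sketch:

```python
from dataclasses import dataclass
from typing import List

from diffusers.utils import BaseOutput

@dataclass
class MyOutput(BaseOutput):
    images: List  # simplified field for illustration

out = MyOutput(images=[1, 2, 3])
assert out.images == out["images"] == out[0]  # attribute, key, and index access agree
```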
Suraj Patil c01ec2d119
[FlaxAutoencoderKL] rename weights to align with PT (#584)
* rename weights to align with PT

* DiagonalGaussianDistribution => FlaxDiagonalGaussianDistribution

* fix name
2022-09-20 13:04:16 +02:00
Younes Belkada 0902449ef8
Add `from_pt` argument in `.from_pretrained` (#527)
* first commit:

- add `from_pt` argument in `from_pretrained` function
- add `modeling_flax_pytorch_utils.py` file

* small nit

- fix a small nit so we do not enter the second if condition

* major changes

- modify FlaxUnet modules
- first conversion script
- more keys to be matched

* keys match

- now all keys match
- change module names for correct matching
- upsample module name changed

* working v1

- tests pass with atol and rtol = `4e-02`

* replace unused arg

* make quality

* add small docstring

* add more comments

- add TODO for embedding layers

* small change

- use `jnp.expand_dims` for converting `timesteps` in case it is a 0-dimensional array

* add more conditions on conversion

- add better test to check for keys conversion

* make shapes consistent

- output `img_w x img_h x n_channels` from the VAE

* Revert "make shapes consistent"

This reverts commit 4cad1aeb4aeb224402dad13c018a5d42e96267f6.

* fix unet shape

- channels first!
2022-09-20 12:39:25 +02:00
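The new flag converts PyTorch checkpoints to Flax parameters at load time (checkpoint id illustrative):

```python
from diffusers import FlaxUNet2DConditionModel

model, params = FlaxUNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # illustrative checkpoint
    subfolder="unet",
    from_pt=True,  # load the PyTorch weights and convert them to Flax
)
```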
Yuta Hayashibe ca74951323
Fix typos (#568)
* Fix a setting bug

* Fix typos

* Reverted params to parms
2022-09-19 21:58:41 +02:00
Yih-Dar 84616b5de5
Fix `CrossAttention._sliced_attention` (#563)
* Fix CrossAttention._sliced_attention

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-19 18:07:32 +02:00
Suraj Patil 8d36d5adb1
Update clip_guided_stable_diffusion.py 2022-09-19 18:03:00 +02:00