* PyTorch-only schedulers
* fix style
* remove match_shape
* PyTorch-only DDPM
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be a torch tensor
* remove set_format from README
* remove set_format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* remove unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* remove commented-out numpy tests
* timesteps need to be discrete
* cast timesteps to int in flax scheduler too
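A sketch of the cast (the dtype choice is an assumption; schedulers index discrete lookup tables, so floats must be rounded first):

```python
import jax.numpy as jnp

timesteps = jnp.linspace(0, 999, 50)             # float by default
timesteps = timesteps.round().astype(jnp.int32)  # discrete indices into the schedule tables
```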
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Allow dtype to be overridden on model load.
This may be a temporary solution until #567 is addressed.
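A minimal sketch of the override, assuming the Flax `from_pretrained` accepts a `dtype` keyword (the checkpoint name is illustrative):

```python
import jax.numpy as jnp
from diffusers import FlaxUNet2DConditionModel

# Sketch only: ask the module to compute in bfloat16 instead of float32.
unet, unet_params = FlaxUNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    subfolder="unet",
    dtype=jnp.bfloat16,
)
```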
* Convert params to bfloat16 or fp16 after loading.
This deals with the weights, not the model.
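A sketch of what casting the weights (the parameter pytree) looks like; the floating-point check is an assumption so integer leaves stay untouched:

```python
import jax
import jax.numpy as jnp

def cast_floating_to(params, dtype=jnp.bfloat16):
    # Cast only floating-point leaves of the param pytree; the module's
    # own computation dtype is not affected.
    return jax.tree_util.tree_map(
        lambda p: p.astype(dtype) if jnp.issubdtype(p.dtype, jnp.floating) else p,
        params,
    )
```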
* Use Flax schedulers (typing, docstring)
* PNDM: replace control flow with jax functions.
Otherwise jitting/parallelization don't work properly as they don't know
how to deal with traced objects.
I temporarily removed `step_prk`.
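The gist, as a sketch: a Python `if` on a traced value fails under `jit`, so branches go through primitives such as `jnp.where` or `jax.lax.cond` (the counter below is illustrative, not the PNDM code):

```python
import jax
import jax.numpy as jnp

@jax.jit
def bump_counter(counter, in_prk_phase):
    # `if in_prk_phase:` would raise a tracer error under jit;
    # jnp.where works on traced values.
    return jnp.where(in_prk_phase, counter + 1, counter)

bump_counter(jnp.array(0), jnp.array(True))  # -> 1
```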
* Pass latents shape to scheduler set_timesteps()
PNDMScheduler uses it to reserve space; other schedulers will just
ignore it.
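Roughly, as a sketch (the `shape` argument follows this description; the rest of the call is an assumption):

```python
import jax.numpy as jnp
from diffusers import FlaxPNDMScheduler

scheduler = FlaxPNDMScheduler()
state = scheduler.create_state()
latents = jnp.zeros((1, 4, 64, 64))  # illustrative latent shape

# PNDM pre-allocates its running buffers from `shape`; other schedulers
# can simply ignore the extra argument.
state = scheduler.set_timesteps(state, num_inference_steps=50, shape=latents.shape)
```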
* Wrap model imports inside availability checks.
* Optionally return state in from_config.
Useful for Flax schedulers.
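A sketch of the pattern (the exact creation path is an assumption; the commit only says the state is returned optionally):

```python
from diffusers import FlaxDDIMScheduler

scheduler = FlaxDDIMScheduler()
# Flax schedulers are stateless; the explicit state travels alongside them.
if getattr(scheduler, "has_state", False):
    state = scheduler.create_state()
```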
* Do not convert model weights to dtype.
* Re-enable PRK steps with functional implementation.
The returned values are not yet verified for correctness.
* Remove leftover has_state var.
* make style
* Apply suggestion list -> tuple
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestion list -> tuple
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Remove unused comments.
* Use zeros instead of empty.
Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Return encoded texts by DiffusionPipelines
* Update README to show how to use encoded_text_input
* Reverted examples in README.md
* Reverted all
* Warning for long prompts
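Plausibly along these lines (a sketch, not the pipeline's exact code; names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def warn_on_long_prompt(tokenizer, prompt):
    # Tokenize without truncation first so we can see what would be cut off.
    ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
    if ids.shape[-1] > tokenizer.model_max_length:
        removed = tokenizer.batch_decode(ids[:, tokenizer.model_max_length:])
        logger.warning(
            "The prompt is longer than the tokenizer's maximum length "
            f"and will be truncated: {removed}"
        )
```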
* Fix bugs
* Formatted
* docs: `src/diffusers` readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* docs: `make style` lint
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* refactor: pipelines readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* docs: remove todo comment from flax pipeline
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* Add pred_original_sample to the SchedulerOutput of the DDPMScheduler, DDIMScheduler, LMSDiscreteScheduler, and KarrasVeScheduler step methods so we can access the predicted denoised outputs
* Give DDPMScheduler, DDIMScheduler, and LMSDiscreteScheduler their own output dataclasses so the default SchedulerOutput in scheduling_utils does not need pred_original_sample as an optional extra
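For instance, a sketch of such a dedicated output class (field layout follows the description above, not verified against the source):

```python
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class DDPMSchedulerOutput:
    prev_sample: torch.FloatTensor                             # x_{t-1}
    pred_original_sample: Optional[torch.FloatTensor] = None   # predicted denoised x_0
```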
* Reordered library imports to follow standard
* didn't get the import order quite right, apparently
* Forgot to change the name of LMSDiscreteSchedulerOutput
* Aha, needed some extra libs for make style to fully work
* add grad ckpt to downsample blocks
* make it work
* don't pass gradient_checkpointing to upsample block
* add tests for UNet2DConditionModel
* add test_gradient_checkpointing
* add gradient_checkpointing for up and down blocks
* add functions to enable and disable grad ckpt
* remove the forward argument
* better naming
* make supports_gradient_checkpointing private
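A sketch of the mechanism behind these commits, assuming a boolean flag flipped by the enable/disable helpers (block internals are illustrative):

```python
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class DownBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.gradient_checkpointing = False  # toggled by the enable/disable helpers

    def forward(self, hidden_states):
        if self.training and self.gradient_checkpointing:
            # Recompute activations in the backward pass to save memory.
            return checkpoint(self.net, hidden_states)
        return self.net(hidden_states)
```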
* Optionally return state in from_config.
Useful for Flax schedulers.
* has_state is now a property; make the check more strict.
I don't check that the class is `SchedulerMixin`, to prevent circular
dependencies. It should be enough that the class name starts with "Flax",
the object declares `has_state`, and `create_state` exists too.
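In other words, a duck-typed check roughly like (sketch of the conditions listed above):

```python
def is_stateful_flax_scheduler(obj) -> bool:
    # Avoid importing SchedulerMixin (circular dependency); duck-type instead.
    return (
        obj.__class__.__name__.startswith("Flax")
        and getattr(obj, "has_state", False)
        and hasattr(obj, "create_state")
    )
```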
* Use state in pipeline from_pretrained.
* Make style
* Fix typo in docstring.
* Allow dtype to be overridden on model load.
This may be a temporary solution until #567 is addressed.
* Create latents in float32
The denoising loop always computes the next step in float32, so this
would fail when using `bfloat16`.
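A sketch of the fix (shape and RNG key are illustrative):

```python
import jax
import jax.numpy as jnp

rng = jax.random.PRNGKey(0)
latents_shape = (1, 4, 64, 64)
# Sample in float32 regardless of the model dtype; the scheduler loop
# runs in float32, so bfloat16 latents would break it.
latents = jax.random.normal(rng, latents_shape, dtype=jnp.float32)
```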
* WIP: flax FlaxDiffusionPipeline & FlaxStableDiffusionPipeline
* todo comment
* Fix imports
* Fix imports
* add dummies
* Fix empty init
* make pipeline work
* up
* Use Flax schedulers (typing, docstring)
* Wrap model imports inside availability checks.
* more updates
* make sure flax is not broken
* make style
* more fixes
* up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@latenitesoft.com>
* first commit:
- add `from_pt` argument in `from_pretrained` function
- add `modeling_flax_pytorch_utils.py` file
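A sketch of the resulting usage, assuming `from_pt` is exposed through `from_pretrained` (checkpoint path illustrative):

```python
from diffusers import FlaxUNet2DConditionModel

# Load a PyTorch checkpoint and convert its state dict to Flax params.
unet, params = FlaxUNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet", from_pt=True
)
```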
* small nit
- fix a small nit so we do not enter the second if condition
* major changes
- modify FlaxUNet modules
- first conversion script
- more keys to be matched
* keys match
- now all keys match
- change module names for correct matching
- upsample module name changed
* working v1
- tests pass with atol and rtol = `4e-02`
* replace unused arg
* make quality
* add small docstring
* add more comments
- add TODO for embedding layers
* small change
- use `jnp.expand_dims` for converting `timesteps` in case it is a 0-dimensional array
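That is, a sketch:

```python
import jax.numpy as jnp

timesteps = jnp.array(999)  # 0-dimensional scalar
if timesteps.ndim == 0:
    timesteps = jnp.expand_dims(timesteps, 0)  # -> shape (1,)
```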
* add more conditions on conversion
- add better test to check for keys conversion
* make shapes consistent
- output `img_w x img_h x n_channels` from the VAE
* Revert "make shapes consistent"
This reverts commit 4cad1aeb4aeb224402dad13c018a5d42e96267f6.
* fix unet shape
- channels first!
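A sketch of the convention (axes illustrative): the UNet expects channels-first latents, so channels-last outputs get transposed back.

```python
import jax.numpy as jnp

sample = jnp.zeros((1, 64, 64, 4))            # NHWC (channels last)
sample = jnp.transpose(sample, (0, 3, 1, 2))  # -> NCHW (channels first) for the UNet
```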